The ResourceInUseException error occurs when Terraform attempts to create an EKS cluster with a name that already exists in your AWS account, or when cluster resources are still being deleted. This prevents the cluster creation from proceeding until the conflicting resource is resolved.
AWS EKS has rejected the cluster creation request because a resource with the same name already exists in your account and region. This typically happens when:

1. A cluster with the same name exists but is in a deleting state
2. Previous Terraform runs created resources that weren't properly cleaned up
3. The cluster name conflicts with an existing cluster in your AWS account
4. Access entries or node groups with the same names already exist on the cluster

AWS uses cluster names as unique identifiers within a region, so each cluster must have a unique name.
First, check if a cluster with the same name actually exists in your AWS account:
aws eks list-clusters --region us-east-1

Replace us-east-1 with your target region. If your cluster name appears in the list, the cluster exists. Then check its status with aws eks describe-cluster - if it shows "DELETING", you must wait until deletion completes (typically 15-30 minutes) before trying to create a new cluster with the same name.
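To see the status directly, you can query just the status field; the --query expression below is a standard AWS CLI JMESPath filter, and the cluster name and region are placeholders:

```shell
# Print only the lifecycle status of the cluster
# (e.g. ACTIVE, CREATING, DELETING).
# Replace my-cluster-name and us-east-1 with your own values.
aws eks describe-cluster \
  --name my-cluster-name \
  --region us-east-1 \
  --query "cluster.status" \
  --output text
```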
Verify your Terraform state hasn't diverged from actual AWS resources:
terraform refresh
terraform plan

This will sync your local state with AWS and show what Terraform thinks should exist. If the plan shows the cluster being created but it actually exists in AWS, your state is out of sync.
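To inspect exactly what Terraform is tracking, you can list and show state entries; the resource address below is an example and may differ in your configuration:

```shell
# List every resource address in the state, filtered to EKS entries.
terraform state list | grep eks

# Show the recorded attributes for the cluster resource, if present.
terraform state show aws_eks_cluster.main
```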
If the cluster exists in AWS but not in your Terraform state, import it:
terraform import aws_eks_cluster.main my-cluster-name

Replace aws_eks_cluster.main with your actual resource address and my-cluster-name with your cluster name. This adds the existing AWS resource to your Terraform state, preventing duplicate creation attempts.
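If the cluster is managed through a module such as terraform-aws-eks, the resource address includes the module path. The address below assumes the module's internal resource is named this with a count index, which is the convention in recent terraform-aws-eks versions - check your terraform plan output for the exact address:

```shell
# Import into a module-managed resource; quote the address so the
# brackets are not interpreted by the shell.
terraform import 'module.eks.aws_eks_cluster.this[0]' my-cluster-name
```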
If you recently deleted a cluster, AWS needs time to fully clean up all associated resources (security groups, networking, IAM roles). Wait 15-30 minutes after deletion completes before attempting to create a new cluster with the same name:
aws eks describe-cluster --name my-cluster-name --region us-east-1

If this returns an error that the cluster doesn't exist, it's safe to create a new one with that name.
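Instead of polling manually, the AWS CLI ships a built-in waiter that blocks until the cluster is fully deleted:

```shell
# Blocks until the cluster is gone, polling EKS on your behalf;
# exits non-zero if the wait times out.
aws eks wait cluster-deleted \
  --name my-cluster-name \
  --region us-east-1
```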
If you need to proceed immediately, modify your Terraform configuration to use a different cluster name:
resource "aws_eks_cluster" "main" {
  name     = "my-cluster-new-v2" # Changed from my-cluster
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

After the problematic resources are fully cleaned up, you can rename it back to your original name.
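Alternatively, a sketch of a naming pattern that avoids collisions entirely, using the hashicorp/random provider to append a short suffix (resource names here are illustrative):

```hcl
resource "random_id" "cluster_suffix" {
  byte_length = 2 # four hex characters
}

resource "aws_eks_cluster" "main" {
  # e.g. "my-cluster-9f3a"; the suffix only changes if the random_id
  # resource itself is replaced, so the name stays stable across runs.
  name     = "my-cluster-${random_id.cluster_suffix.hex}"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
```

Note that changing name forces replacement of an EKS cluster, so this pattern is best adopted at creation time rather than retrofitted.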
Check AWS CloudFormation for any stuck stacks related to EKS:
aws cloudformation list-stacks --stack-status-filter CREATE_FAILED DELETE_FAILED

Look for any stacks with names matching your cluster. If found, attempt deletion:
aws cloudformation delete-stack --stack-name problematic-stack-name

If the stack refuses to delete, you may need to delete associated resources manually (security groups, network interfaces, IAM roles) first.
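To locate leftovers, you can search by the tags EKS applies to resources it manages; the aws:eks:cluster-name tag below is the one EKS sets on the cluster security group, but treat the exact filters as a starting point, and the security group ID is a placeholder:

```shell
# Security groups EKS created for the cluster.
aws ec2 describe-security-groups \
  --filters "Name=tag:aws:eks:cluster-name,Values=my-cluster-name" \
  --query "SecurityGroups[].GroupId"

# Network interfaces still attached to a security group often block
# its deletion; inspect them before removing anything.
aws ec2 describe-network-interfaces \
  --filters "Name=group-id,Values=sg-0123456789abcdef0" \
  --query "NetworkInterfaces[].NetworkInterfaceId"
```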
EKS Access Entry Conflicts: When upgrading terraform-aws-eks module versions (e.g., 19.20 to 20.5), migration from ConfigMap-based access to IAM access entries can trigger ResourceInUseException if access entries are partially created. Manually delete conflicting access entries from the cluster first:
aws eks delete-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::ACCOUNT:role/ROLE_NAME

State Lock Issues: If you're using Terraform with a remote state backend (S3 + DynamoDB), ensure no other team member has a lock on the state:

terraform force-unlock LOCK_ID

Region-Specific Names: Remember that cluster names only need to be unique within a region. You can have clusters with the same name in different regions if needed.
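For the access-entry conflicts described above, you can first enumerate the entries on the cluster to find the conflicting principal before deleting anything (cluster name is a placeholder):

```shell
# Lists the principal ARNs that currently have access entries
# on the cluster.
aws eks list-access-entries --cluster-name my-cluster
```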
Node Group Conflicts: Similar ResourceInUseException errors can occur with managed node groups. If the error mentions "NodeGroup already exists", follow the same troubleshooting steps, but clean up with aws eks delete-nodegroup --cluster-name CLUSTER_NAME --nodegroup-name NODEGROUP_NAME instead.
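As with clusters, the AWS CLI provides a waiter for node group deletion, so a cleanup script can block until it is safe to re-run terraform apply (names are placeholders):

```shell
# Delete the conflicting node group, then wait until AWS has
# fully removed it before re-creating it.
aws eks delete-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

aws eks wait nodegroup-deleted \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup
```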