The ResourceAlreadyExistsException error occurs when Terraform tries to create an AWS CloudWatch Log Group that already exists. This commonly happens with Lambda functions that auto-create log groups, or when log groups were created outside Terraform.
Terraform maintains a state file tracking all resources it manages. When it attempts to create a CloudWatch Log Group via the aws_cloudwatch_log_group resource, AWS rejects the operation if a log group with that name already exists. This error indicates a mismatch between your Terraform configuration and the actual AWS resources: the log group exists in AWS but isn't tracked in your Terraform state. AWS Lambda functions frequently auto-create log groups on first invocation, which is a common source of this conflict.
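To confirm the mismatch from the Terraform side, you can check whether any log group resource is already tracked in state; a quick check, assuming a Unix shell:
terraform state list | grep aws_cloudwatch_log_group
If the command returns nothing but the log group exists in AWS, the resource is unmanaged, which is exactly the situation that triggers this error.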
Check if the log group actually exists in your AWS account:
aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/" --query 'logGroups[?logGroupName==`/aws/lambda/your-function-name`]'
Or use the AWS Console: CloudWatch > Logs > Log Groups and search for your log group name.
If it exists, you'll need to import it into Terraform state or remove it from your configuration.
The recommended solution is to import the existing log group into your Terraform state without destroying it:
terraform import aws_cloudwatch_log_group.example /aws/lambda/my-function-name
The import path is the exact log group name. For Lambda functions, this is typically /aws/lambda/<function-name>.
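Note that terraform import only writes to state; the matching resource block must already exist in your configuration. A minimal sketch, assuming the function is named my-function-name and you want 14-day retention (adjust both to your setup):
resource "aws_cloudwatch_log_group" "example" {
  name              = "/aws/lambda/my-function-name"
  retention_in_days = 14
}
The resource address (aws_cloudwatch_log_group.example) must match the one used in the import command.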
After importing successfully, your terraform plan should show no changes and subsequent terraform apply runs will work correctly.
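If you are running Terraform 1.5 or newer, a declarative import block is an alternative to the CLI command; a minimal sketch, assuming the same resource address and log group name as above:
import {
  to = aws_cloudwatch_log_group.example
  id = "/aws/lambda/my-function-name"
}
With this approach the import happens during the next terraform plan/apply rather than via terraform import, and the block can be removed once the resource is in state.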
If using Lambda, create the CloudWatch Log Group before the Lambda function and explicitly declare a dependency:
resource "aws_cloudwatch_log_group" "lambda_logs" {
name = "/aws/lambda/my-function"
retention_in_days = 14
}
resource "aws_lambda_function" "example" {
function_name = "my-function"
# ... other configuration ...
depends_on = [aws_cloudwatch_log_group.lambda_logs]
}This ensures Terraform creates the log group before Lambda, preventing AWS from auto-creating it and causing conflicts.
If the log group contains sensitive data you don't want to keep, or if you prefer a fresh start:
aws logs delete-log-group --log-group-name "/aws/lambda/my-function-name"
WARNING: This permanently deletes all logs in that group. Only use if you're certain you don't need the logs.
After deletion, run:
terraform apply
Terraform will now successfully create the log group.
If other AWS services auto-create log groups (ECS, API Gateway, etc.), you can configure Terraform to skip deletion:
resource "aws_cloudwatch_log_group" "api_gateway" {
name = "/aws/apigateway/my-api"
retention_in_days = 7
skip_destroy = true # Don't delete on terraform destroy
}This prevents Terraform from deleting log groups that might be recreated by the service. However, the log group must still be imported first.
Why Lambda auto-creates log groups: AWS Lambda automatically provisions CloudWatch Log Groups with the naming pattern /aws/lambda/<function-name> when a function executes for the first time. This happens before Terraform can create it, causing conflicts. Solutions include:
1. Always import first: Use terraform import even for new deployments if the service has already run
2. Create resources before services: Define CloudWatch Log Groups before related resources (Lambda, ECS, API Gateway) to prevent auto-creation
3. Use depends_on explicitly: Force Terraform to manage log groups first, preventing race conditions
4. Terraform Cloud workaround: For state lock issues in concurrent deployments, use Terraform Cloud or Terraform Enterprise with state locking
5. skip_destroy consideration: Only use skip_destroy if the log group is created/managed by another system and you want Terraform to leave it alone on destroy; removing it from state entirely is sketched after this list
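If you want Terraform to stop managing a log group entirely rather than just skip deleting it, you can drop it from state without touching it in AWS; a sketch, assuming the resource address used earlier in this article:
terraform state rm aws_cloudwatch_log_group.lambda_logs
Remove the corresponding resource block from your configuration as well, or the next terraform apply will try to recreate the log group and hit the same error.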
State file synchronization: If you're working in a team, ensure all members use remote state (S3, Terraform Cloud) to prevent state drift where one person's local apply creates resources that aren't tracked in shared state.
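A minimal remote backend sketch using S3 with DynamoDB-based state locking (the bucket, key, region, and table names are placeholders you'd replace with your own):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # placeholder bucket name
    key            = "cloudwatch/terraform.tfstate" # placeholder state path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # placeholder table; enables state locking
    encrypt        = true
  }
}
With a shared backend, an import performed by one team member is immediately visible in everyone else's plans.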