This error occurs when Terraform attempts to create an AWS Glue job that already exists. The solution typically involves importing the existing job into your Terraform state or renaming your job configuration.
The AlreadyExistsException error occurs when Terraform's aws_glue_job resource tries to create a new Glue job, but a job with that name already exists in your AWS account. This is a state management issue: Terraform doesn't recognize that the resource has already been provisioned, either because it was created outside of Terraform, exists from a previous deployment, or because of a state file inconsistency. AWS Glue job names must be unique within your AWS account, so when the CreateJob API is called with a name that already exists, AWS Glue rejects the request with AlreadyExistsException. This commonly happens when scaling Infrastructure as Code deployments or bringing existing infrastructure under Terraform management.
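For context, a minimal configuration like the following reproduces the problem; the names here are illustrative, and terraform apply fails with AlreadyExistsException if a Glue job named my-glue-job already exists in the account:

```hcl
resource "aws_glue_job" "example" {
  # Apply fails with AlreadyExistsException if "my-glue-job" already exists
  name     = "my-glue-job"
  role_arn = aws_iam_role.glue_role.arn

  command {
    name            = "glueetl"
    script_location = "s3://my-bucket/scripts/job.py"
  }
}
```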
Before making changes, confirm that the job actually exists in your AWS account:
1. Navigate to the AWS Glue console
2. Click on "Jobs" in the left sidebar
3. Search for the job name that's causing the error
4. Note the exact job name and any configuration details
This helps you understand whether the job needs to be imported or renamed.
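You can also confirm from the command line. This assumes the AWS CLI is configured for the same account and region; replace my-job-name with the name from the error:

```bash
# Succeeds and prints the job definition if the job exists;
# fails with EntityNotFoundException if it does not
aws glue get-job --job-name my-job-name
```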
If the job should be managed by Terraform, import it into your state file:
```bash
terraform import aws_glue_job.my_job_resource my-job-name
```

Replace:
- aws_glue_job.my_job_resource with your actual resource address in Terraform
- my-job-name with the exact name of the existing Glue job in AWS
After importing, run terraform plan to verify the state is now in sync. The plan should show no changes required.
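Alternatively, on Terraform 1.5 or later you can declare the import in configuration rather than running the CLI command. A minimal sketch using the same illustrative names:

```hcl
import {
  to = aws_glue_job.my_job_resource
  id = "my-job-name" # the existing Glue job's name
}
```

Running terraform plan will then show the pending import, and terraform apply completes it; the import block can be removed afterward.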
If you need to create a new job with a different configuration, change the job name in your Terraform code:
```hcl
resource "aws_glue_job" "example" {
  name     = "my-glue-job-v2" # Changed from "my-glue-job" to a unique name
  role_arn = aws_iam_role.glue_role.arn

  command {
    name            = "glueetl"
    script_location = "s3://my-bucket/scripts/job.py"
  }
}
```

Then run:

```bash
terraform apply
```

Using version numbers or timestamps in job names can help maintain uniqueness across deployments.
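For example, a version suffix driven by a variable keeps the rename explicit; var.job_version below is an illustrative variable, not part of the original configuration:

```hcl
variable "job_version" {
  type    = string
  default = "v2"
}

resource "aws_glue_job" "example" {
  name = "my-glue-job-${var.job_version}" # bump to deploy under a new, unique name
  # ... rest of configuration
}
```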
If the old job is no longer required, delete it from AWS before applying Terraform:
1. Go to the AWS Glue console
2. Select the job in the Jobs list
3. From the Actions menu, choose "Delete job"
4. Type "delete" to confirm
5. Run terraform apply to create the new job
Warning: This will remove the job and any associated run history. Ensure no active workflows depend on it first.
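If you prefer the command line, the equivalent deletion looks like this (again, replace my-job-name with the actual job name; the deletion takes effect immediately):

```bash
# Permanently deletes the Glue job definition
aws glue delete-job --job-name my-job-name
```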
Instead of deleting and recreating, you can manage the existing job with Terraform:
1. Import the job into Terraform state using the terraform import command shown earlier
2. Adjust your Terraform configuration to match the job's current settings
3. Use terraform apply to update any differing configurations
This approach keeps your job's run history and is safer for production Glue jobs.
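To see exactly what Terraform recorded during the import, inspect the resource in state; this assumes the same resource address used earlier:

```bash
# Prints the imported job's attributes as stored in Terraform state
terraform state show aws_glue_job.my_job_resource
```

Comparing this output against your configuration makes it easier to reconcile the two before running terraform apply.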
After resolving the issue, verify that Terraform state is clean:
```bash
terraform plan
```

The output should show "No changes. Your infrastructure matches the configuration." If discrepancies remain, review your resource definitions against what exists in AWS.
For workflows with multiple Glue jobs, repeat the import or rename process for any other conflicting resources.
When working with multiple Terraform environments (dev, staging, prod), use workspace-specific naming or variable interpolation to prevent job name collisions across environments. For example:
```hcl
resource "aws_glue_job" "example" {
  name = "my-job-${terraform.workspace}"
  # ... rest of configuration
}
```

If migrating existing infrastructure to Terraform, batch import existing resources before applying new configurations. Use the terraform import command with a script to automate importing multiple jobs:
```bash
# List every Glue job name in the account, then import each into Terraform state.
# Assumes your configuration declares a matching aws_glue_job resource per job.
for job in $(aws glue list-jobs --query 'JobNames' --output text); do
  terraform import "aws_glue_job.$job" "$job"
done
```

This approach ensures state consistency across your entire Glue infrastructure. Note that terraform import requires a corresponding resource block in your configuration for each job, and any job names containing characters that are invalid in Terraform resource addresses will need to be handled manually.