A saved Terraform plan can no longer be applied because the state was modified after the plan was created. Regenerating the plan resolves this issue.
Terraform plans are bound to a specific state snapshot. When you run terraform plan, Terraform captures the current state and creates a plan describing the changes needed. If the state is modified by another operation before you apply the plan, Terraform detects the mismatch and refuses to apply it. This safety mechanism prevents applying plans based on an outdated view of your infrastructure, which could cause unexpected resource modifications or conflicts.
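One quick way to confirm that the state has moved on since the plan was created is to pull the current state and inspect its serial, which increments every time the state is written. This is a rough sketch, assuming a configured backend and that jq is installed:

terraform state pull | jq '{serial, lineage}'

If the serial is higher than it was when the plan was generated, the saved plan is stale and needs to be regenerated.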
Remove the saved plan file from your working directory. This is the quickest solution for local development:
rm tfplan

Or if your plan file has a different name:

rm [plan-filename]

This prevents Terraform from attempting to use outdated plan data.
Create a fresh plan based on the current state:
terraform plan -out=tfplan

This reads the latest state and compares it against your current configuration. Any changes since the previous plan will be captured in the new plan file.
Apply the newly generated plan:
terraform apply tfplan

Since this plan was just created, it matches the current state and will apply successfully.
If you're in a CI/CD pipeline and the state has changed between stages, re-initialize with -reconfigure so Terraform rebuilds its backend configuration, then regenerate and apply the plan against the current state:
terraform init -reconfigure
terraform plan -out=tfplan
terraform apply tfplan

Always run terraform init in CI/CD pipelines to ensure plugin cache and backend configuration are properly set up before planning.
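As an illustration only, a single pipeline job might run all three commands back to back so nothing can modify the state between planning and applying; the -input=false flags simply disable interactive prompts in CI:

set -euo pipefail
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan

If plan and apply must live in separate stages, pass the plan file between them as an artifact and make sure no other job can write to the state in the meantime.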
To understand what caused the state to change, keep a copy of the stale plan (for example, renamed to tfplan.old) and compare it against a fresh plan:
terraform show -no-color tfplan.old > tfplan.old.txt
terraform plan -no-color > tfplan.new.txt
diff tfplan.old.txt tfplan.new.txt

This shows you what infrastructure differences Terraform detected between plan generations.
Data Source Drift: Some data sources and providers generate new values on every run (for example, the random provider or timestamp functions). These can cause the state serial to increment even though no real infrastructure changed. To debug this, enable verbose logging with TF_LOG=DEBUG.
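For example, you could capture a verbose log during planning and then inspect it for data sources that are re-read on every run (the log file name here is arbitrary, and the exact log wording varies by Terraform and provider version):

TF_LOG=DEBUG terraform plan -out=tfplan 2> plan-debug.log

Review plan-debug.log to see which data sources are evaluated on each plan.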
CI/CD Pipeline Considerations: In multi-stage pipelines, ensure only one stage can run terraform apply at a time. Use state locking with backends like S3+DynamoDB, Azure Blob Storage, or Terraform Cloud to prevent concurrent modifications.
State Locking: Verify your backend supports state locking:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # example bucket name
    key            = "prod/terraform.tfstate"  # example state path
    region         = "us-east-1"               # example region
    dynamodb_table = "terraform-locks"         # DynamoDB table used for state locking
  }
}

Azure-Specific Issue: On Azure, the error may trigger based on blob 'Last Modified' timestamp changes rather than actual content changes. Consider using Terraform Cloud or Enterprise for more reliable state locking.
Terragrunt Users: If using Terragrunt across multiple environments, ensure each environment has isolated state. A common cause is sharing state between stages when using artifacts.
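As a sketch of what isolated state can look like with Terragrunt, a root terragrunt.hcl can derive a distinct state key from each environment's directory, so stages never share a state file. The bucket, table, and region values below are placeholders:

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket         = "my-terraform-state"   # placeholder
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"            # placeholder
    dynamodb_table = "terraform-locks"      # placeholder
    encrypt        = true
  }
}

Each environment that includes this root configuration then gets its own state object and its own lock.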