Terraform state becomes inconsistent when the resources_processed flag stays false after an operation. This typically happens when a state lock issue, an interrupted operation, or a provider failure leaves the state file incomplete.
The resources_processed flag in Terraform state tracks whether all resources in a configuration have been fully processed during an apply or plan operation. When this flag remains false, it indicates that Terraform was interrupted before completing the processing cycle, or the provider encountered an error that prevented state synchronization. This leaves your infrastructure state out of sync with your Terraform configuration, potentially causing subsequent operations to fail or behave unexpectedly.
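Because a pulled state file is plain JSON, you can sanity-check it before attempting any recovery. A minimal Python sketch, assuming a state file in Terraform's documented format (the terraform_version, serial, lineage, and resources fields are standard; summarize_state is an illustrative helper, not a Terraform command):

```python
import json

def summarize_state(path):
    """Summarize a pulled Terraform state file for a quick sanity check.

    Reads the top-level fields of Terraform's JSON state format:
    terraform_version, serial, lineage, and the resources array.
    """
    with open(path) as f:
        state = json.load(f)
    return {
        "terraform_version": state.get("terraform_version"),
        "serial": state.get("serial"),
        "lineage": state.get("lineage"),
        "resource_count": len(state.get("resources", [])),
    }
```

A surprisingly low resource count or an unexpected lineage value is a strong hint that you are looking at the wrong (or a partially written) state file.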
Before making any changes, create a backup of your state file to prevent data loss:
```
terraform state pull > terraform.tfstate.backup
```

For remote state backends, download and back up the state directly:
```
aws s3 cp s3://your-bucket/terraform.tfstate terraform.tfstate.backup
```

If your backend supports state locking (S3+DynamoDB, Consul, TFE, etc.), check for stuck locks:
```
terraform force-unlock <LOCK_ID>
```

Obtain the LOCK_ID from the error message when you attempt an operation. Only force-unlock if you are certain no other Terraform processes are running.
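The lock ID is a UUID embedded in the error text, so it can be extracted programmatically when you need to script recovery. A hedged sketch (the exact wording of the lock error varies by backend; the regex below targets the "ID:" line that the S3/DynamoDB backend and most others print in their Lock Info block):

```python
import re

def extract_lock_id(error_output):
    """Pull the lock ID (a UUID) out of Terraform's state-lock error text.

    Matches the "ID: <uuid>" line from the Lock Info block; returns None
    if no lock ID is present in the output.
    """
    match = re.search(r"ID:\s*([0-9a-fA-F-]{36})", error_output)
    return match.group(1) if match else None
```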
Reconcile your state with actual infrastructure:
```
terraform refresh
```

This queries all providers to update the state file with current resource properties. Do not run this if your infrastructure has been manually modified.
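To see exactly what a refresh changed, you can diff two snapshots taken with terraform state pull before and after the refresh. A sketch (changed_resources is an illustrative helper; it compares entries in the resources array of Terraform's state format, keyed here by type and name for simplicity, ignoring module paths):

```python
import json

def changed_resources(before_path, after_path):
    """List resource addresses whose recorded instances differ between
    two state snapshots (e.g. pulled before and after a refresh).

    Keys resources by "type.name" only; module-nested resources with
    identical type/name would collide in this simplified sketch.
    """
    def index(path):
        with open(path) as f:
            state = json.load(f)
        return {
            f'{res.get("type")}.{res.get("name")}': res.get("instances", [])
            for res in state.get("resources", [])
        }

    before, after = index(before_path), index(after_path)
    return sorted(
        addr
        for addr in set(before) | set(after)
        if before.get(addr) != after.get(addr)
    )
```

Any address this reports represents drift between your last known state and what the provider sees now.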
On Terraform v0.15.4 and later, terraform refresh is deprecated in favor of:

```
terraform apply -refresh-only
```

Reinitialize your Terraform working directory to ensure all backends and providers are properly configured:
```
terraform init
```

If migrating between backends:
```
terraform init -migrate-state
```

If reconfiguring the same backend:
```
terraform init -reconfigure
```

Generate a plan to see what Terraform thinks needs to change:
```
terraform plan
```

Review the output carefully. If the plan shows unexpected changes, you may need to manually edit the state or investigate why resources diverged from configuration.
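Plan review can be partially automated with Terraform's machine-readable plan output (terraform plan -out=plan.out followed by terraform show -json plan.out). A sketch that flags destructive changes using the documented resource_changes section of the JSON plan (risky_changes is an illustrative name; note that a replacement appears as paired "create" and "delete" actions, so checking for "delete" catches both destroys and replacements):

```python
import json

# Produce the input file with:
#   terraform plan -out=plan.out
#   terraform show -json plan.out > plan.json

def risky_changes(plan_json_path):
    """Return addresses of resources the plan would delete or replace.

    Scans the resource_changes section of Terraform's JSON plan format;
    "delete" in the actions list covers both destroys and replacements.
    """
    with open(plan_json_path) as f:
        plan = json.load(f)
    return [
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if "delete" in rc.get("change", {}).get("actions", [])
    ]
```

An empty result does not mean the plan is safe, only that nothing is being destroyed; updates still deserve a manual read-through.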
Once you have confirmed the plan is correct, apply the changes:
```
terraform apply
```

Monitor the logs closely for errors. If the operation is interrupted again, repeat the recovery process from the force-unlock step.
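If transient provider errors keep interrupting the apply, a simple retry wrapper can reduce manual babysitting. This is a generic sketch, not a Terraform feature; the command, attempt count, and delay are illustrative, and blind retries are only appropriate for errors you know to be transient:

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=5):
    """Run a command (e.g. ["terraform", "apply", "-auto-approve"]),
    retrying on a nonzero exit code. Returns True on success.
    """
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return True
        if attempt < attempts:
            time.sleep(delay)
    return False
```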
In rare cases where state file corruption is severe, you may need to restore from a backup or rebuild the state file manually using terraform import for critical resources. For remote backends like S3, ensure your DynamoDB table (if used for locking) has proper permissions and isn't in a stuck state. If using Terraform Cloud/Enterprise, check your organization's run queue and cancel any pending runs before attempting recovery. Consider using terraform show and terraform state show <resource> to inspect individual resource state before making changes.
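When rebuilding state manually, it helps to script the terraform import commands for your critical resources so the rebuild is repeatable. A sketch (the resource addresses and IDs below are hypothetical placeholders; replace them with your own):

```python
# Hypothetical examples only; substitute your real addresses and IDs.
CRITICAL_RESOURCES = [
    ("aws_instance.web", "i-0abc123def4567890"),
    ("aws_s3_bucket.assets", "my-assets-bucket"),
]

def import_commands(resources):
    """Build the `terraform import <address> <id>` commands needed to
    recreate state entries for existing infrastructure."""
    return [f"terraform import {addr} {rid}" for addr, rid in resources]

for cmd in import_commands(CRITICAL_RESOURCES):
    print(cmd)
```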