Terraform cannot write state to the configured backend and creates an errored.tfstate file. Common causes include permission issues, expired credentials, or backend conflicts. Use terraform state push to recover.
This error occurs when Terraform attempts to write the updated state to your configured backend (S3, Terraform Cloud, local, etc.) but encounters an issue that prevents the write operation. When this happens, Terraform writes the state to a local file called "errored.tfstate" in your current working directory as a safety mechanism. This allows you to recover the state rather than losing infrastructure changes entirely. The error prevents your state from being synchronized with your actual infrastructure, which can lead to subsequent operations failing or creating inconsistencies.
Read the full error output carefully. It typically includes:
- The specific backend service that failed (S3, GCS, Azure, etc.)
- Error codes like 403 (permission denied), 503 (service unavailable)
- Network-related errors like timeout or connection refused
# Example error messages
Error: Failed to persist state: Error uploading state: AccessDenied: Access Denied
Error: Failed to persist state: Error writing state: context deadline exceeded
Error: Failed to persist state: This workspace is not locked and will only accept state uploads when locked

Check your current working directory for the errored.tfstate file. This file contains your infrastructure state and is critical for recovery.
ls -la errored.tfstate
# Should output: -rw-r--r-- ... errored.tfstate

Back up this file before proceeding:
cp errored.tfstate errored.tfstate.backup

Address the specific cause of the failure. Check backend credentials and permissions:
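The backup step above can be hardened a little. As a sketch (plain POSIX shell, nothing Terraform-specific), a guarded, timestamped variant fails loudly when the file is missing instead of silently copying nothing:

```shell
# Sketch: guarded, timestamped backup of errored.tfstate.
backup_errored_state() {
  if [ -f errored.tfstate ]; then
    stamp=$(date +%Y%m%d-%H%M%S)
    cp errored.tfstate "errored.tfstate.${stamp}.backup"
    echo "backed up to errored.tfstate.${stamp}.backup"
  else
    echo "no errored.tfstate in $(pwd) -- nothing to back up" >&2
    return 1
  fi
}
```

Run backup_errored_state from the working directory where Terraform wrote errored.tfstate; the timestamp lets you keep several attempts side by side.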
For AWS S3:
- Verify your AWS credentials are valid and not expired
- Check that your IAM user/role has s3:PutObject and s3:GetObject permissions on the bucket
- If using assume_role, ensure the temporary credentials haven't expired
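The checks above can be exercised directly with the AWS CLI. This is a sketch: the bucket and key names are placeholders for whatever your backend block configures.

```shell
# Are the current credentials valid and unexpired?
aws sts get-caller-identity

# Can this principal reach the state object? (bucket/key are placeholders)
aws s3api head-object \
  --bucket your-bucket-name \
  --key terraform.tfstate
```

A 403 from either command points at expired credentials or missing IAM permissions rather than a Terraform problem.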
# Test S3 access
aws s3 ls s3://your-bucket-name/terraform.tfstate

For Terraform Cloud:
- Verify your API token is valid and hasn't expired
- Check organization and workspace settings
- Ensure the workspace is in the correct organization
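You can also probe the API token directly against the Terraform Cloud account endpoint. A sketch, assuming your token is exported as TFE_TOKEN and jq is installed:

```shell
# A valid token returns your account details; an expired one returns 401.
curl -sf \
  -H "Authorization: Bearer ${TFE_TOKEN}" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/account/details | jq '.data.attributes.username'
```

If this fails, regenerate the token before retrying the state push.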
# Test Terraform Cloud connection
terraform login

For GCS (Google Cloud Storage):
- Verify IAM permissions on the bucket
- Check that Application Default Credentials are properly set
- Ensure the VM/runner has cloud-platform API scope
- Wait a few minutes if you recently changed IAM permissions (eventual consistency)
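These GCS checks can be run from the CLI as well. A sketch; the bucket path is a placeholder for your backend's bucket and prefix:

```shell
# Do Application Default Credentials resolve to a usable token?
gcloud auth application-default print-access-token > /dev/null \
  && echo "ADC credentials OK"

# Can this identity list the state prefix? (placeholder path)
gsutil ls gs://your-bucket-name/terraform/state/
```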
For Azure Storage:
- Verify storage account name and key
- Check firewall rules if restrictive
- Ensure the storage account access key hasn't been rotated
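For Azure, a quick read against the state blob confirms the account name, container, and access path in one step. A sketch; the account, container, and blob names are placeholders:

```shell
# Succeeds only if auth, firewall rules, and names are all correct.
az storage blob show \
  --account-name yourstorageaccount \
  --container-name tfstate \
  --name terraform.tfstate \
  --query properties.lastModified
```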
Once you've fixed the underlying issue, push the errored state back to the backend:
terraform state push errored.tfstate

Terraform will prompt for confirmation:
Do you want to copy the state in the local file to the remote state at aws s3? [y/N]

Type 'y' to proceed. This restores your state to the backend.
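If the push is refused, it is usually because the backend's copy has a higher serial or a different lineage. As a sketch (assuming jq is installed), you can compare serials before deciding how to proceed:

```shell
# Serial of the recovered local copy
jq .serial errored.tfstate

# Serial currently stored in the backend, if it is readable
terraform state pull | jq .serial

# terraform state push -force overrides the serial/lineage safety check --
# use it only when you are certain errored.tfstate is the newest state.
```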
After pushing the state, verify everything is working correctly:
terraform plan

If no changes are shown and the command succeeds, your state has been successfully recovered. If you see unexpected changes or errors, investigate further before running apply.
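In CI, this verification can be scripted with plan's -detailed-exitcode flag, which distinguishes "no changes" from "changes pending". A sketch:

```shell
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
terraform plan -detailed-exitcode
case $? in
  0) echo "no changes -- state recovered cleanly" ;;
  2) echo "plan succeeded but shows pending changes -- investigate first" ;;
  *) echo "plan itself failed -- state or credentials still broken" ;;
esac
```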
Once you've confirmed the state is recovered, remove the temporary error file:
rm errored.tfstate

Keep your backup until you're certain everything is working:
rm errored.tfstate.backup

State Lock Conflicts: If the error mentions "This workspace is not locked and will only accept state uploads when locked", ensure you're not running terraform apply with the -lock=false flag. Remove the flag if you used it.
Temporary Token Expiration: With AWS assume_role, temporary credentials can expire mid-operation: STS sessions default to one hour and can be configured as short as 15 minutes. If the error occurs during a long apply, the token may have expired partway through. Set the assume_role session duration long enough to cover your longest operations.
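One workaround is to mint a longer-lived session up front and run the whole apply under it. A sketch (the role ARN and session name are placeholders, jq is assumed, and the requested duration cannot exceed the role's configured maximum session duration):

```shell
# Request a one-hour session explicitly, then export its credentials.
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/terraform-apply \
  --role-session-name tf-recovery \
  --duration-seconds 3600)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .Credentials.SessionToken)

terraform apply
```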
Provider Version Mismatches: After updating a provider version, you may see "Failed to persist state: unsupported attribute" errors. If this happens, pin the provider to the previous version in your required_providers block to maintain state compatibility until you can migrate.
Network Issues: If you see "context deadline exceeded" or timeout errors, check your network connectivity to the backend service. Some backends expose retry settings (for example, the S3 backend's max_retries argument). For S3, also confirm that the bucket region in your backend configuration is correct.
Workspace Separation: In Terraform Cloud, each workspace maintains separate state. Ensure you're using the correct workspace name and that the remote configuration matches your environment.