This error occurs when Terraform cannot locate the S3 bucket specified for the remote backend. The bucket must be created before Terraform can store state files in it.
When you configure Terraform to use an S3 bucket as a remote backend for state storage, Terraform needs to verify that the bucket exists and is accessible during the "terraform init" phase. If the bucket doesn't exist, Terraform fails with this error during state inspection. This is sometimes called the "chicken-and-egg" problem in infrastructure-as-code because you need infrastructure (the S3 bucket) to exist before you can use Terraform to manage other infrastructure. The S3 bucket for remote state storage must be created outside of Terraform, or bootstrapped manually.
Check your Terraform backend configuration in your code. The bucket name must exactly match an existing S3 bucket in your AWS account.
Look for the backend block in your Terraform files (usually in a root main.tf or backend.tf):
```hcl
terraform {
  backend "s3" {
    bucket = "your-bucket-name"
    key    = "path/to/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Write down the exact bucket name and region. You'll use these in the next step.
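If you manage several configurations, you can pull those two values out of the file programmatically instead of copying them by hand. A minimal sketch, assuming the backend block lives in a file such as backend.tf and each setting sits on its own line with a quoted value (both assumptions, not requirements of Terraform):

```shell
# extract_backend_setting FILE KEY
# Prints the quoted value of KEY (e.g. bucket, region) from a backend config file.
extract_backend_setting() {
  grep -E "^[[:space:]]*$2[[:space:]]*=" "$1" | sed -E 's/.*"([^"]*)".*/\1/'
}

# Example usage (backend.tf is an assumed file name):
# extract_backend_setting backend.tf bucket
# extract_backend_setting backend.tf region
```

This is a text-level shortcut, not an HCL parser; it will miss settings that span lines or use variables.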
Ensure you have the correct AWS credentials configured and are targeting the correct AWS account and region.
Check your current AWS profile and credentials:
```shell
aws sts get-caller-identity
aws s3 ls
```

This shows which AWS account you're currently authenticated to. The bucket must exist in this account and region.
If you use named AWS profiles, ensure your Terraform backend configuration references the correct profile:
```hcl
terraform {
  backend "s3" {
    bucket  = "my-bucket"
    region  = "us-west-2"
    profile = "my-profile"
  }
}
```

If the bucket doesn't exist, create it using the AWS CLI:
For us-east-1 region:
```shell
aws s3api create-bucket --bucket my-terraform-state --region us-east-1
```

For other regions (e.g., us-west-2):

```shell
aws s3api create-bucket \
  --bucket my-terraform-state \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```

Note: S3 bucket names must be globally unique across all AWS accounts. If you get an error saying the bucket already exists, try a different name.
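The us-east-1 special case (no --create-bucket-configuration flag) is easy to forget in scripts. As a hedged sketch, a small helper that prints the correct command for any region; the function name and the print-instead-of-run approach are illustrative choices, not AWS CLI features:

```shell
# create_bucket_cmd BUCKET REGION
# Prints the aws s3api create-bucket command appropriate for REGION.
# us-east-1 must NOT include --create-bucket-configuration; all other regions must.
create_bucket_cmd() {
  if [ "$2" = "us-east-1" ]; then
    echo "aws s3api create-bucket --bucket $1 --region $2"
  else
    echo "aws s3api create-bucket --bucket $1 --region $2 --create-bucket-configuration LocationConstraint=$2"
  fi
}
```

Review the printed command, then run it with eval or paste it into your shell.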
Enable versioning for better state management:
```shell
aws s3api put-bucket-versioning \
  --bucket my-terraform-state \
  --versioning-configuration Status=Enabled
```

Enable encryption for security:
```shell
aws s3api put-bucket-encryption \
  --bucket my-terraform-state \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'
```

Once the S3 bucket exists and is accessible, run terraform init:
```shell
terraform init
```

Terraform should now successfully connect to the S3 backend. You should see output like:

```
Successfully configured the backend "s3"!
```

If you previously had a different backend configuration and are now switching to S3, you may need to migrate your state:

```shell
terraform init -migrate-state
```

This will prompt you to confirm migrating state from the old location to the new S3 backend.
For team environments, consider using Terragrunt, which can automatically create the S3 bucket and DynamoDB table for state locking. Alternatively, use a bootstrap script to create the backend resources before running terraform init.
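As one illustrative sketch of the Terragrunt approach (bucket, table, and region names are placeholders), a terragrunt.hcl remote_state block looks roughly like this; when the bucket or lock table is missing, Terragrunt offers to create them before running Terraform:

```hcl
# terragrunt.hcl -- placeholder names throughout
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state"   # placeholder bucket name
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"      # placeholder lock table name
  }
}
```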
If using cross-account AWS access, ensure the IAM principal has the necessary S3 permissions:
- s3:ListBucket on the bucket
- s3:GetObject and s3:PutObject on the state file path
- Optional: s3:DeleteObject for state management
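The permission list above maps onto an IAM policy roughly like the following sketch; the bucket name and state key path are placeholders, so scope the object ARN to your actual key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-state"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-terraform-state/path/to/terraform.tfstate"
    }
  ]
}
```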
For security, always enable:
1. S3 versioning (recovery of previous state versions)
2. Server-side encryption (protect sensitive state data)
3. Block public access settings
4. A DynamoDB table for state locking when multiple team members run Terraform against the same state
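Items 3 and 4 each have a one-off setup command. A sketch, assuming the placeholder names my-terraform-state and terraform-locks; Terraform's S3 backend expects the lock table's hash key to be a string attribute named LockID:

```shell
# Block all public access on the state bucket (item 3)
aws s3api put-public-access-block \
  --bucket my-terraform-state \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Create the lock table (item 4); the backend requires a string hash key named LockID
aws dynamodb create-table \
  --table-name terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```

After creating the table, reference it from the backend block with dynamodb_table = "terraform-locks".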