This error occurs when Terraform cannot access your S3 bucket used for remote state storage. It typically results from missing IAM permissions, incorrect AWS credentials, or misconfigured backend settings.
Terraform stores infrastructure state in an S3 bucket to enable collaboration and maintain state consistency. When Terraform tries to read or write state, AWS returns a 403 Forbidden error if the credentials used lack required permissions or the bucket configuration denies access. This is a permissions issue between your AWS identity (user or role) and the S3 bucket.
Ensure your AWS credentials are set up and accessible to Terraform. Check via:
aws sts get-caller-identity

This command returns the AWS account ID, user ID, and ARN of the caller. Verify they match the account and user you expect. If the command fails, configure credentials using:
aws configure

Or set environment variables:
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=us-east-1

Use the AWS CLI to test access to the S3 bucket directly:
aws s3 ls s3://your-bucket-name/

If this command fails with "NoSuchBucket" or an access denied error, the bucket either does not exist or your credentials cannot access it. If it succeeds, proceed to check IAM permissions. Double-check the bucket name in your Terraform configuration (terraform/backend.tf) for typos.
Your IAM user or role needs these minimum permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketVersioning"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/your-lock-table"
    }
  ]
}

If using DynamoDB for state locking, include the DynamoDB statement; otherwise it can be omitted. Check your IAM policy in the AWS Console under "Users" or "Roles" to confirm these actions are allowed.
Delete the local .terraform directory and terraform.tfstate files to remove any stale references:
rm -rf .terraform terraform.tfstate terraform.tfstate.backup

Then reinitialize:
terraform init

This forces Terraform to fetch the latest state from the S3 backend. It resolves issues where local state refers to deleted buckets or old backend configurations.
Get more details about what Terraform is doing when accessing S3:
export TF_LOG=DEBUG
terraform init 2>&1 | tee terraform-debug.log

Search the log for lines containing "ERROR" or "Access Denied" to see the exact AWS API call that is failing. This helps identify whether the issue is s3:ListBucket, s3:GetObject, or a different permission.
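As a sketch of the kind of filtering that helps here (the log lines below are illustrative stand-ins, not verbatim Terraform output):

```shell
# Create an illustrative stand-in for terraform-debug.log.
cat > sample-debug.log <<'EOF'
2024-01-01T00:00:00 [DEBUG] Retrieving state from S3 bucket "my-terraform-state"
2024-01-01T00:00:01 [ERROR] error loading state: AccessDenied: Access Denied
2024-01-01T00:00:01 [DEBUG] status code: 403, request id: EXAMPLE123
EOF

# Surface the failing call and the HTTP status in one pass.
grep -E 'ERROR|Access ?Denied|status code: 403' sample-debug.log
```

The `Access ?Denied` pattern matches both "AccessDenied" (the API error code) and "Access Denied" (the human-readable message).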
The S3 bucket itself may have a policy explicitly denying access. View and modify the bucket policy via:
aws s3api get-bucket-policy --bucket your-bucket-name

Look for any Deny statements targeting your AWS account or IAM principal. If found, remove or modify the policy to allow your IAM user or role access. To apply an updated policy:
aws s3api put-bucket-policy --bucket your-bucket-name --policy file://policy.json

where policy.json contains your desired policy.
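For reference, policy.json might look like the following sketch; the account ID, user name, and bucket name are placeholders to replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/terraform" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```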
Check your backend configuration for typos and correct values:
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"

    # Optional: enable state locking
    dynamodb_table = "terraform-locks"
  }
}

Common mistakes:
- Typo in bucket name
- Region does not match bucket location
- Missing or incorrect key path
- Using assume_role without proper credential chain setup
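To check the region mismatch in particular, aws s3api get-bucket-location reports where the bucket actually lives. One quirk: buckets in us-east-1 return a null LocationConstraint, so a small normalization helper is useful (the bucket name below is a placeholder):

```shell
# Buckets created in us-east-1 report "None"/null instead of a region name,
# so normalize the output before comparing it to the backend's region.
normalize_region() {
  case "$1" in
    None|null|"") echo "us-east-1" ;;
    *) echo "$1" ;;
  esac
}

# Requires valid AWS credentials; the bucket name is a placeholder:
# raw=$(aws s3api get-bucket-location --bucket my-terraform-state \
#   --query LocationConstraint --output text)
# normalize_region "$raw"   # compare this to the region in backend.tf

normalize_region "None"
```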
In CI/CD environments, provide credentials explicitly instead of relying on environment discovery. Note that a backend block cannot reference Terraform variables, so pass the values at init time with -backend-config flags:

terraform init \
  -backend-config="access_key=$CI_AWS_ACCESS_KEY" \
  -backend-config="secret_key=$CI_AWS_SECRET_KEY"

Or pass credentials via environment variables during terraform init:
export AWS_ACCESS_KEY_ID=$CI_AWS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=$CI_AWS_SECRET_KEY
terraform init

This prevents profile mismatches between local and CI environments.
For multi-account setups, ensure the S3 bucket account has a bucket policy allowing cross-account access from your Terraform execution role. Example policy for cross-account access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::OTHER_ACCOUNT_ID:root"
      },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*"]
    }
  ]
}

If Terraform must assume a role to reach the state bucket, note that the S3 backend manages its own credentials: an assume_role block on the AWS provider does not apply to state access. Configure role assumption for state separately in the backend block itself, using its role_arn argument (or, in Terraform 1.6 and later, a nested assume_role block).
For S3 buckets with KMS encryption, additional kms:Decrypt and kms:GenerateDataKey permissions are required on the KMS key.
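A sketch of the extra IAM statement for that case; the key ARN is a placeholder for your state bucket's KMS key:

```json
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:111122223333:key/your-key-id"
}
```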
Always use least-privilege IAM policies: do not grant AmazonS3FullAccess or AdministratorAccess to Terraform users in production.