Terraform cannot access the S3 backend state file due to insufficient AWS IAM permissions or misconfigured credentials. This prevents Terraform from reading or managing infrastructure state.
This error occurs when Terraform attempts to load your state file from an S3 backend but lacks the necessary AWS credentials or permissions. The AccessDenied error indicates that while Terraform successfully connected to AWS, the credentials being used do not have permission to read the S3 bucket or the state file within it. This is a critical error because Terraform cannot proceed without accessing the state file.
Check that your AWS credentials are properly set up. Terraform will look for credentials in this order:
1. Environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
2. AWS credentials file: ~/.aws/credentials
3. AWS config file: ~/.aws/config with a profile
4. IAM role (if running on EC2, ECS, or Lambda)
Run this command to verify your current AWS identity:
aws sts get-caller-identity
This will show you which AWS account and IAM principal is being used.
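If the identity is not what you expect, it also helps to see where the CLI is resolving credentials from. A quick check using standard AWS CLI and shell commands might look like this:
# Show which credential source (environment, shared credentials file, or IAM role) is in effect
aws configure list
# Confirm which named profile, if any, is active in the current shell
echo "AWS_PROFILE=${AWS_PROFILE:-not set}"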
Before troubleshooting Terraform, verify that you can access the S3 bucket using AWS CLI with the same credentials:
# List the bucket contents
aws s3 ls s3://your-terraform-state-bucket/
# Try to read the state file directly
aws s3 cp s3://your-terraform-state-bucket/terraform.tfstate ./test-state.tfstate
If these commands fail with AccessDenied, the issue is with AWS permissions, not Terraform configuration.
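If the ls or cp commands fail, a head-object call can sometimes narrow down the cause, since it distinguishes a permissions failure (403) from a missing key (404). A minimal sketch, with the bucket and key as placeholders for your own values:
# Check the state object's metadata without downloading it;
# AccessDenied/403 points at permissions, 404 means the key does not exist
aws s3api head-object \
  --bucket your-terraform-state-bucket \
  --key terraform.tfstate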
Verify that your IAM user or role has the required S3 permissions. The IAM policy must include:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-terraform-state-bucket",
        "arn:aws:s3:::your-terraform-state-bucket/*"
      ]
    }
  ]
}
The most commonly missing permission is s3:ListBucket, which must be granted on the bucket ARN itself, not on the objects inside it.
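If the permissions are missing, one way to grant them is to attach the statement above as an inline policy on the role or user that Terraform runs as. The role name, policy name, and file name below are placeholder examples and assume you saved the JSON locally:
# Attach the policy document above as an inline policy on the role used by Terraform
aws iam put-role-policy \
  --role-name terraform-backend-role \
  --policy-name terraform-state-access \
  --policy-document file://terraform-state-policy.json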
If your S3 bucket uses encryption with a customer-managed KMS key, add these permissions to your IAM policy:
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:Encrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:region:account-id:key/key-id"
}
You can find the KMS key ID in the S3 bucket's properties (Server-side encryption section).
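If you are unsure which key the bucket uses, its default encryption configuration can also be read from the CLI (the bucket name is a placeholder):
# Show the bucket's default encryption settings, including the KMS key ARN if one is configured
aws s3api get-bucket-encryption --bucket your-terraform-state-bucket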
If using AWS profiles, ensure your backend configuration explicitly specifies the correct profile:
terraform {
  backend "s3" {
    bucket  = "your-terraform-state-bucket"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    profile = "your-aws-profile"
    encrypt = true
  }
}
Then run:
export AWS_PROFILE=your-aws-profile
terraform init
Note that the profile parameter in the backend block has behaved inconsistently across Terraform versions; setting the AWS_PROFILE environment variable is the more dependable option.
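After changing any backend setting, Terraform has to reinitialize the backend. A typical sequence, assuming the placeholder profile name above, might be:
# Re-read the backend configuration in place without migrating existing state
export AWS_PROFILE=your-aws-profile
terraform init -reconfigure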
Navigate to your S3 bucket in the AWS Console and check the Bucket Policy tab. Look for explicit Deny statements that might block access:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": "*"
}
If you find restrictive policies, update them to allow your IAM principal. You may also need to check the bucket's Access Control List (ACL) settings.
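The same bucket policy can be read from the command line if you prefer not to use the console (the bucket name is a placeholder):
# Print the bucket policy document; this fails with NoSuchBucketPolicy if none is attached
aws s3api get-bucket-policy --bucket your-terraform-state-bucket --query Policy --output text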
For more detailed error information, run Terraform with debug logging enabled:
TF_LOG=DEBUG terraform init
This will show detailed AWS API calls and responses, helping identify the exact permission that is missing. Look for lines mentioning the specific S3 action that failed.
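Because debug output is verbose, writing it to a file makes it easier to search for the failing call. TF_LOG_PATH is a standard Terraform setting; the file name below is just an example:
# Capture the debug log to a file, then search it for access errors
TF_LOG=DEBUG TF_LOG_PATH=./terraform-init.log terraform init
grep -i accessdenied ./terraform-init.log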
If using cross-account access with role_arn, verify that:
1. The role exists in the target account
2. The role's trust policy allows the source account/user to assume it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::source-account-id:user/user-name"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
3. Your IAM user in the source account has sts:AssumeRole permission
4. The role in the target account has the necessary S3 permissions
You can also test the role assumption directly with the AWS CLI, as shown below.
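A minimal sketch of that test, using a placeholder role ARN and session name:
# Try to assume the cross-account role with your current credentials;
# success returns temporary credentials, AccessDenied points at the trust policy or a missing sts:AssumeRole permission
aws sts assume-role \
  --role-arn arn:aws:iam::target-account-id:role/terraform-backend-role \
  --role-session-name terraform-backend-test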
The profile parameter in the S3 backend configuration is unreliable across Terraform versions; prefer environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_PROFILE) instead. When using Terraform workspaces, add a workspace_key_prefix to avoid permission issues when workspace state paths are created. If you hit this error in CI/CD, confirm that the pipeline's service account or role has the correct permissions in the target AWS account. Some organizations apply AWS Organizations Service Control Policies (SCPs) that restrict S3 access at the organization level, so verify these are not interfering. For highly secure environments, consider using temporary credentials via STS AssumeRole with a specific session duration and an external ID.