CloudFront AccessDenied errors occur when your AWS credentials lack required permissions. This guide covers IAM permission issues, S3 bucket policies for origins, and certificate permissions needed for successful distribution creation.
When Terraform attempts to create a CloudFront distribution, AWS returns an AccessDenied error if your credentials lack the necessary CloudFront permissions. This is a credential/IAM issue rather than a Terraform configuration problem. CloudFront requires several distinct permission sets: permissions to create the distribution itself, permissions to access origins (such as S3 buckets), and permissions for any SSL/TLS certificates you're using. The error typically surfaces during apply, when Terraform calls the CloudFront API to create the distribution, though read-only calls during plan can also fail.
Check which AWS credentials Terraform is using by examining environment variables and the Terraform AWS provider configuration.
# Check environment variables
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_PROFILE
# Or verify which profile is active
aws sts get-caller-identity
The output shows the account ID and the IAM user or role being used. Note this for the next step.
Attach a policy to your IAM user or role that grants CloudFront permissions. The minimal policy includes cloudfront:CreateDistribution, cloudfront:UpdateDistribution, and related actions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateDistribution",
        "cloudfront:UpdateDistribution",
        "cloudfront:GetDistribution",
        "cloudfront:ListDistributions",
        "cloudfront:DeleteDistribution",
        "cloudfront:GetDistributionConfig",
        "cloudfront:TagResource"
      ],
      "Resource": "*"
    }
  ]
}
Go to the AWS IAM console, find your user or role, and attach this policy (or add these statements to an existing policy).
If your CloudFront distribution uses an S3 bucket as an origin, your IAM role needs S3 permissions. Additionally, the S3 bucket policy must allow CloudFront to access objects.
Create or update the S3 bucket policy to allow CloudFront's Origin Access Control:
resource "aws_s3_bucket_policy" "cloudfront_access" {
  bucket = aws_s3_bucket.origin.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontAccess"
        Effect = "Allow"
        Principal = {
          Service = "cloudfront.amazonaws.com"
        }
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.origin.arn}/*"
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = "arn:aws:cloudfront::${data.aws_caller_identity.current.account_id}:distribution/${aws_cloudfront_distribution.main.id}"
          }
        }
      }
    ]
  })
}
Also add S3 permissions to your IAM policy if Terraform creates or modifies bucket policies.
If using S3 as an origin, use Origin Access Control (OAC) instead of the deprecated Origin Access Identity (OAI). This provides better security and requires fewer manual bucket policy updates.
resource "aws_cloudfront_origin_access_control" "s3" {
  name                              = "s3-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "main" {
  # ... other config ...
  origin {
    domain_name              = aws_s3_bucket.origin.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.s3.id
    origin_id                = "s3-origin"
  }
}
This approach is more maintainable and requires fewer permissions than OAI.
If your distribution uses a custom domain with an ACM certificate, add ACM permissions to your IAM policy:
{
  "Effect": "Allow",
  "Action": [
    "acm:DescribeCertificate",
    "acm:ListCertificates"
  ],
  "Resource": "*"
}
Also ensure your certificate is in the us-east-1 region, as CloudFront only accepts certificates from that region.
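If you create the certificate in Terraform, a provider alias pinned to us-east-1 keeps it in the region CloudFront requires. A sketch, assuming a placeholder domain of example.com:

```hcl
# CloudFront only accepts ACM certificates from us-east-1,
# so define an aliased provider for that region.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "cdn" {
  provider          = aws.us_east_1
  domain_name       = "example.com" # placeholder domain
  validation_method = "DNS"
}
```

Then reference aws_acm_certificate.cdn.arn in the distribution's viewer_certificate block.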
After updating IAM permissions, wait 2-5 minutes for AWS to propagate the changes. Then test with Terraform:
terraform plan
If you still see AccessDenied, run aws sts get-caller-identity again to confirm you're using the credentials you updated.
CloudFront permissions are separate from S3 and other service permissions: even with full S3 access, CloudFront requires its own IAM permissions. In cross-account scenarios, where the CloudFront distribution lives in one AWS account and the S3 origin bucket in another, you'll also need cross-account trust configuration in the bucket policy. Always use Origin Access Control (OAC) with SigV4 signing for S3 origins rather than the legacy OAI method; it's more secure and requires less manual bucket policy management.
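In the cross-account case, the bucket policy in the bucket-owning account still names the CloudFront service principal, but the AWS:SourceArn condition must reference the distribution in the other account. A hedged sketch; the account ID, bucket name, and distribution ID are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountCloudFront",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-in-other-account/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111111111111:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

Here 111111111111 stands for the account that owns the distribution, not the bucket.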