The 403 Forbidden error in Terraform occurs when your credentials lack sufficient permissions to perform the requested action on cloud resources. This requires verifying IAM permissions, credentials configuration, and API access policies.
The 403 Forbidden error indicates an authorization failure at the HTTP level. Unlike a 401 (authentication failed), a 403 means the request was authenticated successfully, but the authenticated user or service account does not have permission to perform the requested action. In a Terraform context, this typically occurs when:
- Your cloud provider credentials are valid but lack sufficient IAM permissions
- API tokens or service accounts are missing the required scopes
- Resource-based access policies restrict your identity
- Cross-account or cross-organization access policies block the operation
- The cloud provider's organization policies (SCPs, OPA, etc.) deny the action
This is a common issue when using cloud accounts with restricted permissions or when running Terraform in CI/CD environments with limited service account privileges.
First, confirm which credentials Terraform is actually using. Different credential sources are checked in priority order:
For AWS:
aws sts get-caller-identity
terraform console
> data.aws_caller_identity.current.account_id
For GCP:
gcloud auth list
gcloud config get-value project
terraform console
> data.google_client_config.default.project
For Azure:
az account show
terraform console
> data.azurerm_subscription.current.subscription_id
This confirms:
1. You're using the intended credentials
2. The identity is authenticated (not a 401 error)
3. The credentials point to the correct account/project/subscription
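If you want this check captured in the configuration itself, a minimal sketch (AWS shown; the output name is illustrative) that surfaces the identity Terraform actually authenticates as:
# Sketch: expose the caller identity the AWS provider resolved.
# The output name "terraform_identity" is illustrative, not required.
data "aws_caller_identity" "current" {}

output "terraform_identity" {
  value = {
    account_id = data.aws_caller_identity.current.account_id
    arn        = data.aws_caller_identity.current.arn
  }
}
Running terraform plan or apply then prints the resolved account and principal, which is often enough to spot a stale or unexpected credential.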
Verify what permissions your identity actually has.
AWS - Check IAM user/role policies:
aws iam get-user
aws iam list-attached-user-policies --user-name <username>
aws iam list-user-policies --user-name <username>
For a role:
aws iam list-attached-role-policies --role-name <role-name>
aws iam list-role-policies --role-name <role-name>
GCP - Check service account roles:
gcloud projects get-iam-policy <project-id> --flatten="bindings[].members" --filter="bindings.members:serviceAccount:<account-email>"
Azure - Check role assignments:
az role assignment list --assignee <principal-id>
az role definition list --name "Contributor"
Compare the actual permissions with what your Terraform configuration needs.
Terraform debug logs show the precise API endpoint and response, revealing what permission is missing.
Linux/macOS:
export TF_LOG=DEBUG
export TF_LOG_PATH=terraform-debug.log
terraform plan
grep -i "403\|forbidden\|error" terraform-debug.log | head -20Windows PowerShell:
$env:TF_LOG="DEBUG"
$env:TF_LOG_PATH="terraform-debug.log"
terraform plan
Select-String -Path terraform-debug.log -Pattern "403|forbidden|error" | Select-Object -First 20
The debug logs will show:
- Exact API endpoint being called
- Request parameters
- Full response including which permissions are needed
- Resource type and action that failed
Once you identify the missing permission, add it to your user/role.
AWS - Add inline policy to user:
aws iam put-user-policy --user-name <username> --policy-name terraform-policy --policy-document file://terraform-policy.json
Example AWS policy (restricted):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"s3:GetObject",
"s3:PutObject",
"iam:GetRole",
"iam:PassRole"
],
"Resource": "*"
}
]
}
GCP - Assign role to service account:
gcloud projects add-iam-policy-binding <project-id> \
--member=serviceAccount:<account-email> \
--role=roles/compute.admin
Azure - Assign role to service principal:
az role assignment create \
--assignee <principal-id> \
--role "Contributor" \
--scope /subscriptions/<subscription-id>
Note: Use the principle of least privilege - grant only the minimum required permissions.
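If IAM is itself managed with Terraform, a hedged sketch of attaching the example policy above to the Terraform user (the resource and variable names are assumptions; applying it requires an identity that already has IAM write permissions):
# Sketch: create the managed policy from terraform-policy.json and attach it.
# Resource and variable names are illustrative.
variable "terraform_user_name" {
  type = string
}

resource "aws_iam_policy" "terraform" {
  name   = "terraform-policy"
  policy = file("${path.module}/terraform-policy.json")
}

resource "aws_iam_user_policy_attachment" "terraform" {
  user       = var.terraform_user_name
  policy_arn = aws_iam_policy.terraform.arn
}
This is usually applied from a separate, more privileged administration workspace, since the restricted Terraform identity cannot grant permissions to itself.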
Some 403 errors come from organization-level policies, not IAM.
AWS Service Control Policies (SCPs):
# Check if SCPs deny the action
aws organizations list-policies --filter SERVICE_CONTROL_POLICY
aws organizations describe-policy --policy-id <policy-id>
If an SCP explicitly denies an action, even the AWS account root user cannot perform it. Contact your AWS organization administrator.
GCP Organization Policy:
gcloud resource-manager org-policies list --project=<project-id>
gcloud resource-manager org-policies describe compute.trustedImageProjects \
    --project=<project-id>
Azure Management Groups:
az account management-group list
az role assignment list \
--scope /providers/Microsoft.Management/managementGroups/<group-id>
If organization policies are too restrictive, request exemptions from your organization administrator or use a different resource that's not blocked.
If accessing resources in a different account or organization:
AWS - Assume role in another account:
# First, your identity needs sts:AssumeRole permission
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/TerraformRole --role-session-name terraform
Terraform configuration:
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::123456789:role/TerraformRole"
}
}GCP - Use service account from different project:
provider "google" {
project = "target-project-id"
credentials = file("~/path/to/service-account-key.json")
}The service account needs the required role bindings in the target project.
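Where downloading a key file is undesirable, the google provider also supports service account impersonation. A hedged sketch (the service account email is an assumption):
# Sketch: impersonate the target project's service account instead of using
# a long-lived key file. Your own identity needs
# roles/iam.serviceAccountTokenCreator on that service account.
provider "google" {
  project                     = "target-project-id"
  impersonate_service_account = "terraform@target-project-id.iam.gserviceaccount.com"
}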
Azure - Use managed identity:
provider "azurerm" {
features {}
subscription_id = "<target-subscription-id>"
}Ensure your managed identity has the necessary role assignments in the target subscription.
Some resources have their own access controls that can cause 403 errors.
AWS S3 bucket policy (blocking state backend access):
aws s3api get-bucket-policy --bucket terraform-state
GCS bucket IAM:
gsutil iam get gs://terraform-state-bucket
Azure Storage account network rules:
az storage account show --name <account-name> \
--query networkRuleSet
If using remote state with S3/GCS/Azure Storage, ensure the credentials have:
- s3:GetObject, s3:PutObject for S3
- storage.objects.get, storage.objects.create for GCS
- Microsoft.Storage/storageAccounts/blobServices/containers/blobs/* for Azure
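For reference, a minimal S3 backend sketch showing where those permissions are exercised (the bucket, key, and lock table names are assumptions):
# Sketch: S3 remote state backend. The credentials running terraform init/plan
# need s3:ListBucket on the bucket, s3:GetObject and s3:PutObject on the state
# key, and DynamoDB item permissions on the lock table if one is used.
terraform {
  backend "s3" {
    bucket         = "terraform-state"        # assumed bucket name
    key            = "prod/terraform.tfstate" # assumed state key
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # assumed lock table (optional)
  }
}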
Some cloud APIs require additional verification.
AWS MFA for sensitive operations:
If your policy requires MFA:
# Check if MFA is required
aws iam get-user
aws iam list-mfa-devices --user-name <username>
If MFA is configured, you may need to use temporary credentials from sts:GetSessionToken:
aws sts get-session-token --serial-number <mfa-device-arn> --token-code <mfa-code>
IP-based access restrictions:
# Test connectivity to the API endpoint
curl -I https://ec2.amazonaws.com
If your organization restricts API access by IP, ensure your current network is whitelisted.
Azure AD Conditional Access:
Check if your Azure AD tenant has policies that block access from your location/device.
Isolate the issue by testing the exact operation with your cloud provider's CLI:
AWS - Try the failing operation:
# If Terraform fails creating an EC2 instance
aws ec2 run-instances --image-id ami-0c55b159cbfafe1f0 --instance-type t2.micro
GCP - Try the failing operation:
# If Terraform fails creating a compute instance
gcloud compute instances create test-instance --zone=us-central1-a
Azure - Try the failing operation:
# If Terraform fails creating a resource group
az group create --name test-rg --location eastus
If the CLI command works, the issue is Terraform-specific (likely cached credentials or provider configuration).
If the CLI command also fails with 403, it confirms the permission issue is real and you need to grant permissions.
Understanding the 403 vs 401 distinction:
401 (Unauthorized) means authentication failed - your credentials don't exist or are invalid.
403 (Forbidden) means authentication succeeded but authorization failed - your credentials are valid but lack permission.
This distinction matters because 401 usually means "fix your credentials", while 403 means "request access from an administrator."
Terraform credential precedence (AWS):
1. Static credentials in the provider block (access_key, secret_key)
2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
3. Shared credentials and config files (~/.aws/credentials, ~/.aws/config), including SSO profiles
4. Container credentials (ECS/EKS)
5. IAM role from EC2 instance metadata (instance profile)
If you have multiple credential sources configured, Terraform uses the first one it finds. This can cause confusion if old credentials are still in environment variables.
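One way to catch that early is a guard that fails the plan when the wrong account is resolved. A hedged sketch assuming Terraform 1.4+ (for terraform_data); the expected account ID is a placeholder:
# Sketch: stop the plan if the resolved credentials point at the wrong account,
# e.g. because stale environment variables are overriding the intended profile.
variable "expected_account_id" {
  type    = string
  default = "123456789012" # placeholder; set to the account you intend to manage
}

data "aws_caller_identity" "current" {}

resource "terraform_data" "account_guard" {
  lifecycle {
    precondition {
      condition     = data.aws_caller_identity.current.account_id == var.expected_account_id
      error_message = "Terraform resolved credentials for the wrong AWS account - check for stale environment variables."
    }
  }
}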
Cross-account access patterns:
When accessing resources in multiple AWS accounts, use AssumeRole with a trust relationship. The source account role needs sts:AssumeRole permission, and the target account role needs a trust policy that allows the source account/role.
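In Terraform, the target-account side of that relationship might look like this hedged sketch (the account ID 111111111111 and the role name are placeholders):
# Sketch: role in the target account that the source account is trusted to assume.
resource "aws_iam_role" "terraform" {
  name = "TerraformRole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:root" } # source account
    }]
  })
}
The source-account identity still needs its own policy allowing sts:AssumeRole on this role's ARN.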
Terraform state file permissions:
If Terraform cannot read your state file from remote storage (S3/GCS/Azure Storage), no plan or apply can run. This commonly surfaces as a 403 during terraform init, plan, or apply when Terraform tries to read the existing state.
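If the state bucket also has its own bucket policy, it must allow the Terraform identity. A hedged sketch for S3 (the bucket name and role ARN are placeholders):
# Sketch: bucket policy statement letting the Terraform role list the state
# bucket and read/write state objects. Names and ARNs are illustrative.
resource "aws_s3_bucket_policy" "terraform_state" {
  bucket = "terraform-state"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::123456789012:role/TerraformRole" }
      Action    = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::terraform-state",
        "arn:aws:s3:::terraform-state/*"
      ]
    }]
  })
}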
Service account best practices:
- Create dedicated service accounts for Terraform, not shared accounts (see the sketch after this list)
- Use short-lived credentials (assume role with expiry) rather than long-lived keys
- Rotate service account keys regularly
- Use different service accounts for different environments (dev, staging, prod)
- Grant minimum required permissions (principle of least privilege)
- Audit service account activity in CloudTrail (AWS) or Cloud Audit Logs (GCP)
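A hedged GCP sketch of the first and fifth points (the project ID, account ID, and granted role are assumptions; applying it requires an identity allowed to manage IAM):
# Sketch: dedicated, minimally privileged service account for Terraform.
resource "google_service_account" "terraform" {
  project      = "my-project-id"
  account_id   = "terraform"
  display_name = "Terraform automation"
}

# Grant only the role(s) the configuration actually needs.
resource "google_project_iam_member" "terraform_compute" {
  project = "my-project-id"
  role    = "roles/compute.admin"
  member  = "serviceAccount:${google_service_account.terraform.email}"
}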