The 'permission denied' error occurs when Terraform cannot write to a file location due to insufficient permissions. This happens when target directories lack write access, existing files have restrictive permissions, or you're writing to protected system directories. Fix it by using writable directories, adjusting file permissions, or modifying directory ownership.
This error occurs when the Terraform local_file provider attempts to create or modify a file but lacks the necessary write permissions on the target directory or file. The error can surface in several contexts: when using the local_file resource to write configuration files, when Terraform Enterprise agents try to persist state files, or when file provisioners attempt to deploy files to restricted locations. The root cause is a filesystem permission issue: either the user running Terraform doesn't have write access to the target directory, an existing file has restrictive permissions that prevent reading or updating it, or you're attempting to write to a system directory (like /etc/) that requires elevated privileges. This is security by design: Terraform respects Unix file permissions and won't bypass them.
Write files to directories where the user running Terraform has full write access, such as your project directory or /tmp. Avoid system directories like /etc/ or /var/:
# Wrong - system directory requires root
resource "local_file" "config" {
  content  = "app config"
  filename = "/etc/myapp/config.conf"
}

# Correct - use project directory
resource "local_file" "config" {
  content  = "app config"
  filename = "${path.module}/config/config.conf"
}

Using ${path.module} ensures the file is created relative to your Terraform module directory, which is typically user-writable.
Always set file permissions that allow Terraform to both write and read the file. Avoid write-only permissions:
resource "local_file" "example" {
content = "Hello world"
filename = "${path.module}/hello.txt"
file_permission = "0644" # Owner: read/write, Others: read only
}Common safe permissions:
- 0644 - Owner reads/writes, others read (standard for config files)
- 0600 - Only owner reads/writes (for sensitive files; see the sketch after this list)
- Never use 0200 (write-only) as Terraform cannot read the file on subsequent runs.
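For the sensitive-file case, here is a minimal sketch (the resource name, path, and content are illustrative) that pairs 0600 with the local_sensitive_file resource from the hashicorp/local provider, which also redacts the content in plan output:

# Sketch: 0600 keeps the file owner-only while still readable on later runs.
resource "local_sensitive_file" "db_password" {
  content         = "s3cr3t-value"                  # illustrative secret
  filename        = "${path.module}/secrets/db.txt" # illustrative path
  file_permission = "0600"                          # owner read/write only
}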
Ensure the target directory exists and has write permissions for the user running Terraform:
# Check current directory permissions
ls -ld /path/to/target/dir

# Add write permission if missing (only if you own the directory)
chmod u+w /path/to/target/dir

# For group write access (shared environments)
chmod g+w /path/to/target/dir

The directory must be writable by the Terraform user. If you don't own it, ask the owner to grant write access or use a different directory.
If a file exists with overly restrictive permissions, manually delete it before re-running Terraform:
# Remove the problematic file
rm /path/to/file

# Re-run Terraform
terraform apply

Alternatively, fix the existing file's permissions:

# Make the file readable
chmod u+r /path/to/file

After fixing, re-run terraform apply.
When using file provisioners on remote machines, the connection user may lack permissions. Use a two-step approach: copy to /tmp, then move to the protected location with sudo:
provisioner "file" {
source = "myfile.conf"
destination = "/tmp/myfile.conf"
}
provisioner "remote-exec" {
inline = [
"sudo mv /tmp/myfile.conf /etc/myapp/myfile.conf",
"sudo chown root:root /etc/myapp/myfile.conf",
"sudo chmod 644 /etc/myapp/myfile.conf"
]
}This works around the limitation that file provisioners cannot use sudo directly.
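These provisioners run inside a resource and use its connection block, and the connection user determines which remote paths the file provisioner can write to. A hedged sketch of that wrapper (the resource type, AMI, user, and key path are placeholders, not part of the original example):

# Sketch only: AMI, user, and key path are placeholders.
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"

  # The connection user can write to /tmp but needs sudo for /etc.
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }

  # Place the file and remote-exec provisioner blocks shown above here.
}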
If running in Terraform Enterprise with non-root agents, grant the tfc-agent group write access to necessary directories:
# Allow tfc-agent group to write to root home directory
sudo chown -R :tfc-agent /root
sudo chmod -R g+rwx /root

Terraform Enterprise agents run as a non-root user for security. Ensure working directories have group write permissions for the agent's user group. Check your Terraform Enterprise documentation for the specific service user name.
File permissions in Terraform interact with the resource's lifecycle: Terraform reads the file to detect changes, so file permissions must allow both write (for creation) and read (for state detection) operations. This is why write-only permissions fail on subsequent runs.
In containerized environments (Docker, Kubernetes), container users often have limited permissions. Mount volumes with proper ownership (matching the container user's UID/GID) to avoid permission errors.
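As an illustration of matching volume ownership to the container user, here is a hedged sketch using the Terraform kubernetes provider (the pod name, image, UID, and mount path are assumptions for this example): fs_group makes the mounted volume group-writable by the non-root user, so generated files can be written there without permission errors.

# Sketch only: names, image, UID, and paths are illustrative.
resource "kubernetes_pod" "runner" {
  metadata {
    name = "terraform-runner"
  }

  spec {
    security_context {
      run_as_user = 1000 # non-root UID the container runs as
      fs_group    = 1000 # volume ownership matches that user's group
    }

    container {
      name  = "runner"
      image = "hashicorp/terraform:1.9" # illustrative image

      volume_mount {
        name       = "workspace"
        mount_path = "/workspace" # write generated files here, not /etc
      }
    }

    volume {
      name = "workspace"
      empty_dir {}
    }
  }
}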
For shared multi-user environments, consider using a shared directory with group write permissions or a dedicated service account that all users can assume. Never use world-writable directories (777) for security reasons—instead, manage group membership appropriately.
On macOS, the permission model differs slightly from Linux; be aware of filesystem ACLs if they are in use. On hardened Linux systems, the noexec mount flag commonly set on /tmp can also prevent scripts from executing; if that applies, write temporary files to /var/tmp instead.