A Terraform provisioner (local-exec or remote-exec) executed a command that failed with exit status 1. This occurs when the underlying script or command returns a non-zero exit code, causing the provisioner to fail and mark the resource as tainted.
Exit status 1 is a generic non-zero exit code that indicates command failure. When Terraform runs a provisioner, it captures the exit code of the executed command. Any non-zero exit status (like 1) causes Terraform to consider the provisioner failed. By default, this failure marks the resource as tainted, requiring it to be destroyed and recreated on the next terraform apply. The error message itself reports only the generic exit status, so the actual cause of the failure is often buried earlier in the output, making root-cause diagnosis difficult.
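For illustration, the error can be reproduced with a minimal configuration; the null_resource and the exit 1 command below are hypothetical stand-ins for a real resource and script:

resource "null_resource" "failing_example" {
  provisioner "local-exec" {
    # Any command that returns a non-zero exit code reproduces the error;
    # "exit 1" stands in for a real script that fails.
    command = "exit 1"
  }
}

Running terraform apply on this configuration fails with an exit status 1 provisioner error and marks null_resource.failing_example as tainted.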
Enable Terraform debug logging to see more detailed error output:
# Linux/macOS
export TF_LOG=DEBUG
terraform apply 2>&1 | tee terraform.log
# Windows (PowerShell)
$env:TF_LOG="DEBUG"
terraform apply | Tee-Object -FilePath terraform.log
Review the log file to find the actual error message before the exit status 1.
Run the exact command outside of Terraform to identify the failure:
For local-exec:
# Test your command directly
/path/to/your/script.sh
echo "Exit code: $?"For remote-exec, SSH to the remote instance and run the command:
ssh -i /path/to/key.pem user@remote-host
# Then run your command
your-command-here
This reveals the actual error instead of Terraform's generic exit status message.
By default, each inline command keeps running even if an earlier one fails, and only the last command's exit status determines whether the provisioner succeeds. Add set -e to stop at the first error:
Before (failures in earlier commands are ignored):
provisioner "remote-exec" {
inline = [
"apt-get update",
"apt-get install nginx",
"systemctl start nginx"
]
}After (stops at first error):
provisioner "remote-exec" {
inline = [
"set -e",
"apt-get update",
"apt-get install nginx",
"systemctl start nginx"
]
}The set -e flag causes the script to exit immediately if any command returns non-zero.
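As an alternative to long inline lists, the remote-exec provisioner also accepts a script argument, which copies a local script to the remote host and executes it there. A minimal sketch, assuming a hypothetical script path and the same connection details as the examples below:

provisioner "remote-exec" {
  # Copies the local script to the remote machine and runs it;
  # put "set -e" at the top of the script itself to fail fast.
  script = "${path.module}/scripts/install_nginx.sh"  # hypothetical path

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.private_key_path)
    host        = aws_instance.example.public_ip
  }
}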
For file provisioners, ensure paths exist and permissions are correct:
# Verify source file exists before provisioning
provisioner "file" {
  source      = "${path.module}/scripts/deploy.sh"  # Use path.module for relative paths
  destination = "/tmp/deploy.sh"

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.private_key_path)
    host        = aws_instance.example.public_ip
  }
}

# Make script executable
provisioner "remote-exec" {
  inline = [
    "set -e",
    "chmod +x /tmp/deploy.sh",
    "/tmp/deploy.sh"
  ]

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.private_key_path)
    host        = aws_instance.example.public_ip
  }
}

If the provisioner failure isn't critical, allow Terraform to continue:
provisioner "local-exec" {
command = "curl http://example.com/webhook || true" # || true prevents exit 1
on_failure = continue # Don't taint resource if provisioner fails
}Or use || true in your command to return exit code 0 even if the command fails.
Note: Use this sparingly. If the provisioner is critical, fix the underlying issue instead.
If using remote-exec, verify the connection works:
# Test SSH connection
ssh -i /path/to/key.pem -v user@remote-host echo "Connected"
# If using WinRM on Windows instances
# Test with: powershell -Command "Test-WSMan -ComputerName remote-host -ErrorAction SilentlyContinue"
Common remote-exec connection issues:
- Security group doesn't allow SSH (port 22) or WinRM (port 5985/5986)
- Instance still initializing (add depends_on, a waiter, or a longer connection timeout; see the sketch after this list)
- Incorrect username for the AMI (ubuntu, ec2-user, admin, etc.)
- Private key file permissions too open (chmod 600 on Unix)
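When the instance is still booting as the provisioner starts, a longer connection timeout gives SSH time to come up. A minimal sketch with placeholder values, reusing the key variable and AMI user from the examples above:

provisioner "remote-exec" {
  inline = ["echo connected"]

  connection {
    type        = "ssh"
    user        = "ubuntu"                        # must match the AMI's default user
    private_key = file(var.private_key_path)
    host        = aws_instance.example.public_ip
    timeout     = "10m"                           # wait up to 10 minutes for SSH to become reachable
  }
}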
Why provisioners are discouraged:
HashiCorp recommends treating provisioners as a last resort because they:
1. Make infrastructure less reproducible (different results on retry)
2. Hide implicit dependencies not shown in Terraform graph
3. Create coupling between Terraform and external tools
Better alternatives:
- User data scripts: Use user_data or user_data_base64 on cloud instances (see the sketch after this list)
- Cloud-init: Pass initialization via cloudinit_config data source
- Configuration management: Use Ansible, Chef, Puppet, or SaltStack
- Container images: Bake configuration into AMIs or Docker images via Packer
- CI/CD integration: Trigger post-Terraform deployments via pipelines
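For example, an EC2 instance can run its bootstrap script through user_data at first boot instead of a remote-exec provisioner. A minimal sketch; the AMI ID, instance type, and script path are placeholders:

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  # Executed by cloud-init on first boot; no SSH connection or provisioner required
  user_data = file("${path.module}/scripts/bootstrap.sh")
}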
Debugging provisioner issues:
- Set TF_LOG=DEBUG environment variable for verbose output
- Check CloudWatch logs or system logs on remote instances
- Test commands manually before adding to provisioners
- Use connection blocks to specify SSH/WinRM details explicitly
- Consider using on_failure = continue only as a temporary workaround