The 'Resource creation cancelled' error occurs when Terraform interrupts or cancels the provisioning of a resource, typically because a dependent resource failed, the user interrupted the run, or a context deadline was reached. The state file may be left inconsistent with the resources that actually exist in the cloud.
This error indicates that Terraform initiated the cancellation of a resource creation operation. Unlike a hard failure from the cloud provider, cancellation happens when Terraform itself decides to stop the operation—typically because a prerequisite resource failed, the user pressed Ctrl+C, a deadline was reached, or a context was cancelled. This often leaves the infrastructure in an incomplete state where some resources exist in the cloud but Terraform's state file doesn't reflect them.
The 'Resource creation cancelled' message is usually a cascade effect. Look at the full Terraform output to find the original error that triggered the cancellation:
terraform apply 2>&1 | tee apply.log
Search the log for the first error that appears before any cancelled messages (see the grep sketch after this list). The root cause could be:
- A network timeout
- Invalid resource configuration
- Missing required arguments
- Provider authentication failure
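A quick way to surface that first error is to grep the captured log before reading it top to bottom. This is a minimal sketch, assuming the apply.log file produced by the tee command above:
# Show the first error lines, with line numbers, in order of appearance
grep -n -i 'error' apply.log | head -n 20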
After interruption or cancellation, your state may be out of sync with the cloud:
# Refresh state to see what actually exists
terraform refresh
# Check what Terraform thinks exists
terraform state list
# View the state for a specific resource
terraform state show aws_instance.example
If resources exist in the cloud but not in your state, you'll need to import them.
If a resource was created in the cloud but Terraform's state wasn't updated due to cancellation:
# Find the resource ID in your cloud provider console
# For AWS EC2:
terraform import aws_instance.example i-1234567890abcdef0
# For AWS RDS:
terraform import aws_db_instance.example mydb
# For Azure:
terraform import azurerm_resource_group.example /subscriptions/{subId}/resourceGroups/mygroup
After importing, run terraform plan to confirm no unintended changes.
If cancellation is cascading through dependent resources, review your resource dependencies:
resource "aws_instance" "app_server" {
# Ensure security group exists first
vpc_security_group_ids = [aws_security_group.main.id]
# Or use explicit dependency
depends_on = [aws_security_group.main]
}Use terraform graph to visualize dependencies:
terraform graph | dot -Tsvg > graph.svg
Run Terraform with reduced parallelism to create resources sequentially. This makes it easier to see which specific resource fails:
# Create resources one at a time instead of in parallel (default is 10)
terraform apply -parallelism=1
This approach also prevents cascading cancellations: if one resource fails, only that one is affected, not a batch of dependent resources.
Get detailed traces of where and why cancellation occurred:
export TF_LOG=trace
terraform apply 2>&1 | tee terraform-debug.log
# Search for cancellation context
grep -i 'cancel\|deadline\|context' terraform-debug.log
Look for messages like "context cancelled," "context deadline exceeded," or "run context exists, stopping." These indicate the exact point where Terraform initiated the cancellation.
Ensure you have valid credentials and the cloud provider is accessible:
# Test AWS credentials
aws sts get-caller-identity
# Test Azure login
az account show
# Test GCP credentials
gcloud auth list
Also check the cloud provider's status page for any service degradation or outages that might cause cancellations.
Resource creation cancellation is particularly problematic because it can leave infrastructure in an inconsistent state. Terraform's interrupt handling has improved over versions, but if you're running terraform apply in an automated environment (CI/CD) that sends SIGTERM signals to all processes, Terraform may be killed mid-provision, causing the provider plugin to die abruptly and leaving resources partially created.
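One mitigation, if your CI system sends SIGTERM to the whole process group, is a small wrapper that converts that signal into SIGINT, which Terraform treats as a request for a graceful shutdown. This is a sketch under that assumption, not an official HashiCorp pattern; adapt it to your runner:
#!/usr/bin/env bash
# Hypothetical CI wrapper: translate the runner's SIGTERM into SIGINT so
# Terraform can attempt a graceful shutdown instead of dying mid-provision.
terraform apply -auto-approve &
tf_pid=$!
# On SIGTERM, ask Terraform to stop gracefully (same as pressing Ctrl+C once).
trap 'kill -INT "$tf_pid"' TERM
# wait returns early if interrupted by the trap, so wait a second time to
# capture Terraform's real exit code after it finishes shutting down.
wait "$tf_pid"
status=$?
if [ "$status" -gt 128 ]; then
    wait "$tf_pid"
    status=$?
fi
exit "$status"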
When cancellation happens during apply, Terraform attempts to gracefully shut down, but the state file may not be updated to reflect what was actually created in the cloud. This is why terraform import and terraform refresh are critical recovery steps.
For production deployments, consider: (1) using lifecycle prevent_destroy on critical resources, (2) splitting large deployments into smaller modules, (3) configuring appropriate timeouts, and (4) implementing proper monitoring and alerting for incomplete state.
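As an illustration of points (1) and (3), the sketch below marks a critical resource with prevent_destroy and extends its operation timeouts. The resource and AMI ID are hypothetical, and the available timeout names vary by resource type, so check your provider's documentation:
resource "aws_instance" "critical" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"

  # (1) Protect critical resources from accidental destruction.
  lifecycle {
    prevent_destroy = true
  }

  # (3) Give slow operations more headroom before Terraform cancels them.
  timeouts {
    create = "20m"
    delete = "30m"
  }
}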