This error occurs when attempting to delete or recreate an AWS ECS cluster that still has active services. You must delete all services in the cluster before the cluster itself can be deleted.
AWS ECS prevents deletion of clusters that contain running or active services. When Terraform attempts to destroy or update an ECS cluster that still has services deployed, AWS returns a ClusterContainsServicesException. This safety mechanism prevents accidental data loss and enforces a proper resource cleanup order.
Before attempting deletion, check what services are currently deployed:
aws ecs list-services --cluster <cluster-name> --region <region>
aws ecs describe-services --cluster <cluster-name> --services <service-name> --region <region>
This shows all active services and their status (ACTIVE, DRAINING, INACTIVE).
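The two commands above can be combined into a small helper that prints each service alongside its status. This is a sketch, not official tooling: the function name and the --query expressions are illustrative choices, and it assumes the AWS CLI is installed and configured.

```shell
# Sketch: print "name status" for every service in a cluster.
# Assumes the "aws" CLI is configured with credentials for the account.
list_service_status() {
  cluster=$1
  region=$2
  for svc in $(aws ecs list-services --cluster "$cluster" --region "$region" \
                 --query 'serviceArns[]' --output text); do
    # --query pulls just the name and status out of the describe response.
    aws ecs describe-services --cluster "$cluster" --services "$svc" \
      --region "$region" \
      --query 'services[0].[serviceName,status]' --output text
  done
}

# Usage: list_service_status example-cluster us-east-1
```

Any service reported as ACTIVE here will block cluster deletion.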
Delete the ECS services before attempting to delete the cluster. In Terraform, ensure aws_ecs_service resources are defined and will be destroyed first:
resource "aws_ecs_service" "example" {
  name            = "example-service"
  cluster         = aws_ecs_cluster.example.id
  task_definition = aws_ecs_task_definition.example.arn
  desired_count   = 1
}

resource "aws_ecs_cluster" "example" {
  name = "example-cluster"
}
Terraform will automatically destroy services before the cluster, because the service's reference to aws_ecs_cluster.example.id creates an implicit dependency.
If a service relies on resources that Terraform cannot see through attribute references, such as an IAM role policy, add an explicit depends_on to the service so those resources are destroyed only after the service is gone. (Do not add depends_on = [aws_ecs_service.example] to the cluster itself: the service already references the cluster, so that would create a dependency cycle.) The policy name below is a placeholder for whichever policy your service's role uses:

resource "aws_ecs_service" "example" {
  name            = "example-service"
  cluster         = aws_ecs_cluster.example.id
  task_definition = aws_ecs_task_definition.example.arn
  desired_count   = 1

  depends_on = [aws_iam_role_policy.example]
}
This keeps the service's IAM policies in place until the service itself has been destroyed, so teardown can proceed cleanly to the cluster.
If services are stuck in DRAINING state, scale down the desired task count:
aws ecs update-service --cluster <cluster-name> --service <service-name> --desired-count 0 --region <region>
Wait for tasks to fully drain before proceeding with cluster deletion.
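The drain-and-delete sequence can be scripted across every service in the cluster. This is a minimal sketch assuming a configured AWS CLI; the function name is invented here, error handling is omitted, and `aws ecs wait services-stable` is used to block until the scaled-down service has no running tasks:

```shell
# Sketch: scale every service in a cluster to zero, wait for it to drain,
# then delete it, so the cluster itself can be deleted afterwards.
drain_cluster_services() {
  cluster=$1
  region=$2
  for svc in $(aws ecs list-services --cluster "$cluster" --region "$region" \
                 --query 'serviceArns[]' --output text); do
    # Scale to zero so running tasks are stopped.
    aws ecs update-service --cluster "$cluster" --service "$svc" \
      --desired-count 0 --region "$region"
    # Block until the service has drained.
    aws ecs wait services-stable --cluster "$cluster" --services "$svc" \
      --region "$region"
    # Delete the now-empty service.
    aws ecs delete-service --cluster "$cluster" --service "$svc" \
      --region "$region"
  done
}

# Usage: drain_cluster_services example-cluster us-east-1
```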
If using EC2 launch type, deregister all container instances from the cluster:
aws ecs list-container-instances --cluster <cluster-name> --region <region>
aws ecs deregister-container-instance --cluster <cluster-name> --container-instance <instance-arn> --force --region <region>
Remove all capacity provider associations before deletion.
After all services and instances are removed, retry the Terraform destroy:
terraform destroy
If the destroy still hangs, you may need to wait a few minutes for AWS to fully propagate the service deletions (eventual consistency), then run destroy again.
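That retry-and-wait step can be sketched as a small shell helper. The attempt count and delay below are arbitrary choices, not Terraform defaults:

```shell
# Sketch: retry "terraform destroy" a few times to ride out AWS eventual
# consistency after service deletion. Attempts and delay are tunable.
destroy_with_retry() {
  attempts=${1:-3}
  delay=${2:-60}
  i=1
  while [ "$i" -le "$attempts" ]; do
    # Succeeds and returns as soon as one destroy completes cleanly.
    terraform destroy -auto-approve && return 0
    echo "destroy failed (attempt $i/$attempts); waiting ${delay}s for AWS to propagate deletions"
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Usage: destroy_with_retry 3 60
```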
Even after all services are deleted, the error can reappear because of AWS eventual consistency: when you delete an ECS service, the deletion takes a moment to propagate across AWS systems. If Terraform attempts to delete the cluster immediately afterwards, AWS may not have registered the service deletion yet. This is a known issue, tracked in terraform-provider-aws#4852. For Fargate services, use capacity_providers carefully, as they can introduce unexpected dependency chains. If you manage resources through Terraform's aws_cloudformation_stack resource, ensure proper resource ordering in the CloudFormation template. Consider using depends_on to explicitly control destruction order and avoid race conditions during service teardown.