The "keepers value has changed" error occurs when a random resource's keeper values are modified between Terraform plan and apply phases. This causes Terraform to detect inconsistency and fail. Understanding when and why keepers change is critical for reliable infrastructure code.
In Terraform's random provider, the `keepers` argument is a map of arbitrary key-value pairs used to control when a random value should be regenerated. By design, random values are deterministic—once generated, they remain the same across multiple terraform plan and apply cycles unless the keepers themselves change. When you see "keepers value has changed," it means the keeper values that Terraform recorded during the plan phase differ from their values during the apply phase. This creates an inconsistency because the random resource was told to keep its value stable based on certain conditions, but those conditions changed mid-execution. This error typically indicates one of three problems: (1) keeper values are dynamic and change between plan and apply, (2) there's a circular dependency preventing proper evaluation order, or (3) referenced values in keepers haven't been computed yet when Terraform tries to use them.
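As a minimal illustration of that mechanic, assuming a hypothetical var.environment input variable: the ID below stays stable across runs until the keeper value changes, at which point the resource is replaced and a new ID is generated.

resource "random_id" "example" {
  keepers = {
    # Changing var.environment replaces this resource and produces a new ID;
    # as long as it stays the same, the ID is stable across plan/apply cycles.
    environment = var.environment
  }
  byte_length = 8
}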
Examine the keepers block in your random resource configuration. Keepers should only reference values that exist before Terraform starts the apply phase and won't change afterward.
Good example—reference a pre-existing input variable:
resource "random_id" "instance" {
keepers = {
ami_id = var.ami_id # Input variable, stable across plan/apply
}
byte_length = 8
}Bad example—reference a computed resource attribute:
resource "random_id" "instance" {
keepers = {
file_hash = filemd5("./config.txt") # May differ between plan and apply
}
byte_length = 8
}Identify any keepers that reference computed outputs, file operations, or attributes of resources created in the same run.
Replace any computed keeper values with stable references. If you only need the random resource to be created after another resource, use depends_on for ordering instead of putting that resource's attributes in keepers.
Replace file hashes with static content if possible:
# Bad: file hash changes during apply
keepers = {
  config_hash = filemd5("./config.txt")
}

# Better: reference a static input variable
keepers = {
  config_version = var.config_version
}

After editing keepers, run:

terraform plan

The plan should now show consistent keeper values without proposing a replacement.
If the error persists, check for circular references where:
1. Random resource's keepers reference ResourceA
2. ResourceA depends on the random resource's output
This creates a cycle that Terraform can't resolve. Fix it by restructuring your configuration.
Example of a circular dependency:
# WRONG: Circular dependency
resource "random_id" "bucket_id" {
  keepers = {
    bucket_name = aws_s3_bucket.my_bucket.id # Depends on the S3 bucket
  }
  byte_length = 8
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket-${random_id.bucket_id.hex}" # Depends on random_id
}

Fix it by breaking the cycle:
# CORRECT: No circular dependency
resource "random_id" "bucket_id" {
  keepers = {
    prefix = var.bucket_prefix # Reference input variable only
  }
  byte_length = 8
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = "${var.bucket_prefix}-${random_id.bucket_id.hex}"
}

Re-run terraform plan to confirm the cycle is resolved.
If you need the random resource to be created after another resource, use depends_on for ordering instead of adding that resource's attributes to keepers. Keep in mind that depends_on only controls ordering; it will not regenerate the random value when the other resource changes, so drive regeneration explicitly through a stable keeper such as a variable.
Good practice:
resource "random_id" "new_id" {
keepers = {
# Only reference stable values
seed = var.regenerate_seed
}
depends_on = [
aws_instance.app # Ensures proper ordering
]
byte_length = 8
}This tells Terraform to respect the dependency without creating inconsistency in keeper values.
If you genuinely need a new random value, don't force it by pointing keepers at unstable computed values. Instead, use the -replace option:
terraform apply -replace=random_id.bucket_id

This explicitly tells Terraform to destroy and recreate that resource, generating a new random value. This is safer and clearer than trying to manipulate keepers.
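On Terraform versions older than v0.15.2, which introduced the -replace option, the equivalent is the now-deprecated taint command:

terraform taint random_id.bucket_id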
Alternatively, you can change a keeper variable manually:
terraform apply -var="regenerate_seed=2"

Because the variable is set before planning begins, the plan and apply phases see the same keeper value, avoiding the inconsistency error.
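For this to work, the keeper must be wired to a declared input variable. A minimal sketch, assuming the regenerate_seed name used above:

variable "regenerate_seed" {
  description = "Bump this value to force the random ID to regenerate"
  type        = string
  default     = "1"
}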
This issue has been partially addressed in newer versions of Terraform and the random provider. Ensure you're running a recent version:
terraform version

Update to the latest Terraform and random provider:

terraform init -upgrade

Check your .terraform.lock.hcl dependency lock file to see which version of the random provider you're using. Consider upgrading to the latest:
terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5" # Use the latest stable version
    }
  }
}

After upgrading, run terraform init and test again.
Understanding keeper behavior is essential for predictable infrastructure code. Keepers were designed to give you deterministic control over when random values regenerate—they're a feature, not a bug. The error occurs when Terraform detects that your keeper conditions changed mid-execution, which undermines that predictability.
In production environments, prefer changing keepers through variables passed at apply time rather than through computed resource attributes. This ensures consistency and makes the regeneration explicit and auditable.
For complex scenarios involving file generation, consider using local_file resources in a separate terraform apply step, then reference their hashes in subsequent runs. Alternatively, use external data sources to fetch stable values that can be used in keepers.
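A hedged sketch of the external data source approach, assuming a hypothetical ./scripts/config_version.sh script: the hashicorp/external provider requires the program to print a single JSON object of string values, and the script must return the same value during plan and apply for the keeper to stay consistent.

data "external" "config_meta" {
  # Hypothetical script that prints, for example, {"version": "42"}
  program = ["./scripts/config_version.sh"]
}

resource "random_id" "config" {
  keepers = {
    config_version = data.external.config_meta.result.version
  }
  byte_length = 8
}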
When troubleshooting, always check your .terraform.lock.hcl to ensure you're using compatible versions of the random provider and Terraform core—version mismatches can sometimes cause consistency issues.