Terraform encounters shared memory exhaustion, typically in Terraform Enterprise deployments that use a PostgreSQL backend. This occurs when the system's IPC shared memory is insufficient for the database's operations.
The 'Out of Shared Memory' error indicates that PostgreSQL (used by Terraform Enterprise) cannot allocate enough Inter-Process Communication (IPC) shared memory for its operations. Shared memory is a kernel resource used by PostgreSQL for buffer pools, locks, and process coordination. When this limit is exceeded, the database cannot function properly and Terraform operations fail.
Connect to your PostgreSQL database and verify current settings:
SHOW shared_buffers;
SHOW max_locks_per_transaction;
SHOW dynamic_shared_memory_type;
For Terraform Enterprise, access the PostgreSQL container logs to identify the exact exhaustion point.
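If you have shell access to the database host, both checks can be scripted in one pass. This is a minimal sketch that assumes a local psql client, the default postgres superuser, an application database named hashicorp, and a PostgreSQL container named tfe-postgres; all of these are placeholders to adjust for your deployment.
# Report the current shared memory settings in one psql invocation:
psql -h 127.0.0.1 -U postgres -d hashicorp \
  -c "SHOW shared_buffers;" \
  -c "SHOW max_locks_per_transaction;" \
  -c "SHOW dynamic_shared_memory_type;"
# Search the container logs for the failure itself (container name is an assumption):
docker logs tfe-postgres 2>&1 | grep -i "out of shared memory"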
Edit the postgresql.conf file (typically in the PostgreSQL data directory) and increase shared_buffers:
shared_buffers = 512MB
For very high-load environments, consider 1024MB or higher. The rule of thumb is about 25% of available system RAM; avoid setting it much higher, since PostgreSQL will fail to start if it cannot allocate the requested memory, and an oversized buffer pool starves the OS cache and other processes.
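If you prefer not to edit postgresql.conf by hand, the same change can be made with ALTER SYSTEM, which writes the value to postgresql.auto.conf; a restart is still required because shared_buffers only takes effect at server start. The user and service name below are assumptions.
psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '512MB';"
sudo systemctl restart postgresql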
For Terraform Enterprise on mounted disk, copy postgresql.conf from the container to the host mounted disk path and create a bind mount.
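A hypothetical example of that workflow, assuming the internal PostgreSQL container is named tfe-postgres and the mounted disk path is /opt/tfe/postgres; both names are placeholders for your installation.
# Copy the config out of the container so it can be edited on the host:
docker cp tfe-postgres:/var/lib/postgresql/data/postgresql.conf /opt/tfe/postgres/postgresql.conf
# After editing the copy, bind-mount it back over the original, e.g. in docker-compose.yml:
#   volumes:
#     - /opt/tfe/postgres/postgresql.conf:/var/lib/postgresql/data/postgresql.conf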
In the same postgresql.conf file, increase max_locks_per_transaction:
max_locks_per_transaction = 512
This controls the average number of object locks allocated per transaction slot; the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) locks in total. High-concurrency environments may need 1024 or more.
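To gauge how close you are to the limit before picking a value, count the locks currently held and compare against the lock-table capacity described above. This assumes psql access as the postgres user.
psql -U postgres -c "SELECT count(*) AS held_locks FROM pg_locks;"
psql -U postgres -c "SHOW max_connections;" -c "SHOW max_prepared_transactions;"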
If running Terraform Enterprise in Docker, increase the /dev/shm allocation in your docker-compose.yml:
services:
  tfe:
    shm_size: '1gb'
Restart the container after this change. This ensures dynamic_shared_memory_type = posix has sufficient space.
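After the restart, you can confirm the new allocation from inside the container. The service name tfe matches the compose snippet above and is otherwise an assumption.
docker exec tfe df -h /dev/shm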
Check and increase kernel limits if PostgreSQL still fails:
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall
To increase permanently, edit /etc/sysctl.conf:
kernel.shmmax = 17179869184
kernel.shmall = 4194304
kernel.shmmax is measured in bytes and kernel.shmall in memory pages (typically 4 KB), so both values above describe roughly 16 GB. Apply changes:
sudo sysctl -p
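To confirm the new limits are active, or to apply them only for the current boot without editing /etc/sysctl.conf, sysctl can read and write the values directly:
# Read back the current limits:
sudo sysctl kernel.shmmax kernel.shmall
# Temporary, non-persistent alternative to editing /etc/sysctl.conf:
sudo sysctl -w kernel.shmmax=17179869184 kernel.shmall=4194304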
After configuration changes, restart PostgreSQL (or the entire TFE container):
# For standalone PostgreSQL
sudo systemctl restart postgresql
# For Docker container
docker-compose restart tfe
Monitor the PostgreSQL logs to ensure no errors appear:
tail -f /var/log/postgresql/postgresql.log
Run a terraform plan or apply to verify the issue is resolved:
terraform plan
The operation should complete without shared memory errors. Monitor system memory and PostgreSQL logs during execution to ensure stability.
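One simple way to watch memory pressure while the plan runs is sketched below; the container name tfe is an assumption, and on a standalone host the free command alone is sufficient.
# Container-level memory usage (single snapshot):
docker stats tfe --no-stream
# Host-level memory, refreshed every 5 seconds:
watch -n 5 free -m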
For extremely large configurations with hundreds of modules, consider splitting the Terraform workload into smaller units with separate state files. This reduces memory pressure on the PostgreSQL backend. Additionally, monitor memory consumption patterns over time: if memory grows unbounded, you may be experiencing the OOM Killer issue documented in HashiCorp support articles. In such cases, also check for unbounded memory growth in puma processes within the TFE container and consider resource limits or container memory constraints. If you are running a Terraform version older than 1.6.0, upgrading brings optimized provider schema caching that significantly reduces memory consumption.
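If you suspect that kind of unbounded growth, a quick check is to list the largest processes inside the TFE container and see whether puma workers dominate. The container name tfe is an assumption, and the command assumes procps ps is available in the image.
docker exec tfe ps aux --sort=-rss | head -n 10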