This fatal error occurs when PostgreSQL cannot allocate enough memory to execute a query or operation. Common causes include work_mem set too high for the number of connections, high concurrent connections, inefficient queries, and insufficient system RAM.
The 'FATAL: out of memory' error indicates that PostgreSQL's memory allocator failed to obtain the requested amount of memory from the operating system. This is a critical error that terminates the query or connection. PostgreSQL memory allocation failures can occur in several scenarios:
- **work_mem exhaustion**: Each query operation (sort, hash, join) can consume up to work_mem bytes. Many concurrent queries, each running several such operations, can exhaust available RAM.
- **Insufficient system RAM**: The database server doesn't have enough physical memory plus swap to handle the current workload.
- **Memory leaks**: Unreleased memory due to PostgreSQL bugs or unclosed connections accumulates over time.
- **Large result sets**: Queries fetching millions of rows at once require substantial memory to buffer results.
- **Connection explosion**: Too many active connections (controlled by max_connections), each maintaining its own buffers and work areas.
- **Inefficient queries**: Complex queries with many sorts, hashes, or large JOIN operations can consume very large amounts of memory.
- **Kernel settings**: The system's memory overcommit policy or shared memory limits may prevent allocation.
The key distinction is that 'out of memory' errors are different from 'out of shared memory' errors: 'out of memory' means a backend could not allocate private memory for a query operation, while 'out of shared memory' means PostgreSQL's shared memory segment, used by all connections, is exhausted.
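As a quick sanity check, the worst-case footprint implied by your current settings can be estimated directly in psql. This is only a rough sketch (it ignores parallel workers, maintenance_work_mem, and per-connection overhead), but it shows how work_mem multiplies with max_connections:
-- Rough worst case: every allowed connection holding one full work_mem
-- allocation at the same time, plus the shared buffer pool
SELECT
current_setting('max_connections') AS max_connections,
current_setting('work_mem') AS work_mem,
current_setting('shared_buffers') AS shared_buffers,
pg_size_pretty(
  current_setting('max_connections')::bigint * pg_size_bytes(current_setting('work_mem'))
  + pg_size_bytes(current_setting('shared_buffers'))
) AS rough_worst_case;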
First, verify your PostgreSQL memory settings and available system resources:
Check PostgreSQL memory settings:
# Connect to PostgreSQL
psql -U postgres
-- Show current work_mem setting (per operation)
SHOW work_mem;
-- Show shared_buffers (shared cache for all connections)
SHOW shared_buffers;
-- Show max_connections (each connection uses memory)
SHOW max_connections;
-- Show maintenance_work_mem (used by VACUUM, CREATE INDEX)
SHOW maintenance_work_mem;
-- Show effective_cache_size (query planner hint)
SHOW effective_cache_size;
Check system available memory:
# Linux: check free memory and swap
free -h
# Show memory breakdown
cat /proc/meminfo | grep -E "MemTotal|MemAvailable|SwapTotal|SwapFree"
# Check if swap is being used (high usage indicates memory pressure)
vmstat 1 5
# Docker/Kubernetes: check container limits
docker stats <container-id> # For Docker containers
kubectl top pod <pod-name> # For Kubernetes pods
Example output interpretation:
Typical issue: work_mem=1GB with 100 connections = 100GB potential memory usage
Safe calculation:
- Available RAM = 32GB
- shared_buffers = 8GB (25%)
- Reserve for OS = 4GB
- Available for work_mem = 20GB
- If max_connections = 100, then work_mem should be 20GB / 100 = ~200MB
The most common fix for out of memory errors is to reduce work_mem, which controls memory per sort/hash operation:
Important: Decreasing work_mem may increase query time as operations spill to disk, but it prevents OOM crashes.
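Before lowering anything, it helps to confirm whether a given sort actually fits in work_mem. A quick sketch using EXPLAIN ANALYZE (table and column names are placeholders, and the sizes in the comments are illustrative only):
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY amount DESC;
-- In the plan output, check the Sort node:
--   "Sort Method: quicksort  Memory: 3177kB"      -> fit in work_mem
--   "Sort Method: external merge  Disk: 81136kB"  -> spilled to disk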
Check if queries are spilling to disk:
-- Enable logging of temporary files created
SET log_temp_files = 0; -- Log all temp files (0 = no size limit)
-- Run a problematic query and check logs
SELECT * FROM large_table ORDER BY complex_expression;
# Check PostgreSQL logs
tail -f /var/log/postgresql/postgresql.log
# Look for "temporary file" entries indicating spilled sorts
Reduce work_mem temporarily (current session):
-- Set to a safer value for this session
SET work_mem = '64MB';
-- Now run the problematic query
SELECT * FROM your_table ORDER BY column;
Reduce work_mem permanently (all new connections):
# Edit PostgreSQL configuration
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find the line (or add it):
# work_mem = '4MB' # old value
work_mem = '64MB' # new value
# For systems with many connections, use even lower:
# work_mem = '32MB' # 100 connections × 32MB = 3.2GB max
# work_mem = '16MB' # 200 connections × 16MB = 3.2GB max
# Save and reload the configuration (work_mem does not require a full restart)
sudo systemctl reload postgresql
# Verify the change took effect
sudo -u postgres psql -c "SHOW work_mem;"
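If you prefer not to edit postgresql.conf by hand, the same change can be made from psql. A sketch only; the role name below is a placeholder:
-- ALTER SYSTEM writes to postgresql.auto.conf; work_mem only needs a reload
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
SHOW work_mem;
-- Or scope a higher limit to a single memory-hungry role instead of globally
-- (takes effect for that role's new sessions)
ALTER ROLE reporting_user SET work_mem = '256MB';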
Using docker-compose (pass the settings as server flags):
services:
  postgres:
    image: postgres:15
    # The official image forwards these flags to the postgres server process
    command: postgres -c work_mem=64MB -c shared_buffers=512MB
Using Docker run:
docker run -d \
  -e POSTGRES_PASSWORD=<your-password> \
  postgres:15 -c work_mem=64MB
Recommended work_mem values by system:
Small server (2GB RAM, few connections):
work_mem = 32MB
Medium server (16GB RAM, 50 connections):
work_mem = 100MB (16GB / 50 connections / safety factor)
Large server (128GB RAM, 200 connections):
work_mem = 200MB (128GB / 200 connections / safety factor)
Very high concurrency (500+ connections):
work_mem = 16-32MB (risk of spilling to disk but prevents OOM)
If out of memory errors persist after reducing work_mem, you may need to balance shared_buffers to free up memory:
Warning: Reducing shared_buffers can impact query performance due to fewer cached pages. Only reduce if necessary.
Check cache hit ratio first:
-- Calculate buffer cache hit ratio
SELECT
sum(heap_blks_read) as heap_read,
sum(heap_blks_hit) as heap_hit,
sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;
-- Should be > 99% for good performance
-- If < 90%, shared_buffers might be too small
Reduce shared_buffers if needed:
# Edit PostgreSQL configuration
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find and modify:
shared_buffers = '2GB' # Old value (25% of RAM)
# Change to:
shared_buffers = '1GB' # Reduced value
# Restart PostgreSQL
sudo systemctl restart postgresql
# Verify
sudo -u postgres psql -c "SHOW shared_buffers;"
Important considerations:
- Only reduce shared_buffers as a last resort
- Minimum value is typically 128MB
- If you reduce shared_buffers, monitor query performance
- The OS page cache will help fill the gap (up to ~50% of RAM)
- For 24/7 servers, the penalty diminishes over time as the cache warms up
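If you do lower shared_buffers, it is usually worth keeping the planner's cache estimate realistic as well. A minimal sketch, assuming roughly 16GB of RAM (adjust the value to your machine):
-- effective_cache_size should reflect shared_buffers plus the OS page cache;
-- it is only a planner hint and takes effect on reload
ALTER SYSTEM SET effective_cache_size = '12GB';
SELECT pg_reload_conf();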
Before throwing hardware at the problem, identify and optimize queries consuming excessive memory:
Find slow and resource-heavy queries:
-- Enable query logging
ALTER SYSTEM SET log_min_duration_statement = 1000; -- Log queries > 1 second
ALTER SYSTEM SET log_statement = 'all'; -- Log all queries
-- Reload configuration
SELECT pg_reload_conf();
-- Or edit postgresql.conf and reload
Analyze a problematic query:
-- Get detailed execution plan
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US')
ORDER BY order_date DESC;
-- Look for:
-- - "Seq Scan" on large tables (add indexes)
-- - "Sort" or "Hash" with high memory usage (add indexes, adjust query)
-- - "Nested Loop" with large inner tables (refactor JOIN)Common optimization techniques:
1. Add indexes to reduce full table scans:
-- Create index on frequently filtered columns
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
CREATE INDEX idx_orders_status ON orders(status) WHERE status != 'completed';
-- Verify index is used
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
2. Reduce result set size:
-- Bad: Fetch all rows then sort in memory
SELECT * FROM orders ORDER BY amount DESC; -- May use work_mem
-- Good: Add LIMIT to reduce memory usage
SELECT * FROM orders ORDER BY amount DESC LIMIT 1000;
-- Better: Paginate with a keyset condition; large OFFSETs still scan and discard the skipped rows
-- (:last_seen_amount is a bind parameter: the smallest amount returned by the previous page)
SELECT * FROM orders WHERE amount < :last_seen_amount ORDER BY amount DESC LIMIT 1000;
3. Split complex queries:
-- Bad: Multi-step JOIN with sorts (combines memory allocations)
SELECT c.id, o.id, COUNT(*)
FROM customers c
LEFT JOIN orders o ON c.id = o.customer_id
LEFT JOIN items i ON o.id = i.order_id
GROUP BY c.id, o.id
ORDER BY COUNT(*) DESC;
-- Better: Use CTE to reduce intermediate result sets
WITH customer_orders AS (
    SELECT customer_id, COUNT(*) AS order_count
    FROM orders
    GROUP BY customer_id
)
SELECT c.id, order_count
FROM customers c
LEFT JOIN customer_orders co ON c.id = co.customer_id
ORDER BY order_count DESC;
4. Use EXPLAIN to find memory-intensive operations:
# Run EXPLAIN from command line
sudo -u postgres psql << EOF
EXPLAIN (ANALYZE, BUFFERS, FORMAT json)
SELECT * FROM orders WHERE created_at > '2024-01-01';
EOF
# Look for high "Memory" or "Peak Memory" in outputToo many simultaneous connections each using work_mem can quickly exhaust memory:
Check current connections:
-- See active connections
SELECT pid, usename, application_name, state, query_start
FROM pg_stat_activity
WHERE state != 'idle';
-- Count by user
SELECT usename, count(*) FROM pg_stat_activity GROUP BY usename;
-- See max allowed
SHOW max_connections;
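If many of the connections found above are idle, their memory can be reclaimed right away by closing them. A sketch only: the one-hour threshold is an arbitrary example, and terminating sessions should be coordinated with application owners first:
-- Terminate sessions idle for over an hour (requires superuser or pg_signal_backend)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND state_change < now() - interval '1 hour'
AND pid <> pg_backend_pid();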
Reduce max_connections if too high:
# Edit configuration
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find max_connections (default 100)
max_connections = 200 # Old value
# Reduce to:
max_connections = 100 # Or appropriate for your system (requires a restart to take effect)
# Safe calculation: max_connections × work_mem should be < 50% of RAM
# Example: 32GB RAM → 50% = 16GB → with 64MB work_mem → max 256 connections
Implement connection pooling instead of raising max_connections:
# Use pgBouncer or pgpool to multiplex connections
# This allows 1000+ client connections using only 100 backend connections
# Install pgBouncer
sudo apt install pgbouncer
# Configure /etc/pgbouncer/pgbouncer.ini
[databases]
mydb = host=localhost port=5432 dbname=mydb
[pgbouncer]
max_client_conn = 1000 # Client connections
default_pool_size = 20 # Backend connections per database
reserve_pool_size = 5
# Start pgBouncer
sudo systemctl start pgbouncer
# Connect through pgBouncer (port 6432 instead of 5432)
psql -h localhost -p 6432 -d mydb
Linux kernel settings can prevent memory allocation even when RAM is available:
Check overcommit settings:
# See current overcommit policy
cat /proc/sys/vm/overcommit_memory
# Possible values:
# 0 = Heuristic overcommit (default, usually OK)
# 1 = Always allow (permissive, risk of OOM killer)
# 2 = Strict, never overcommit (may cause allocation failures)
# If it's 2, check the overcommit ratio
cat /proc/sys/vm/overcommit_ratio
# Default is 50. You can increase it:
# sudo sysctl -w vm.overcommit_ratio=80
If using vm.overcommit_memory=2:
# Allow more overcommit by increasing ratio
sudo sysctl -w vm.overcommit_ratio=100
sudo sysctl -w vm.panic_on_oom=0 # Prevent panic, let OOM killer handle it
# Make permanent
echo "vm.overcommit_ratio=100" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Increase available swap space:
# Check current swap
free -h
# Create swap file if needed (for cloud instances without swap)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make permanent in /etc/fstab
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstabFor Docker/Kubernetes containers:
# Set memory limits in docker-compose.yml
services:
  postgres:
    image: postgres:15
    mem_limit: '4g'       # Hard limit
    memswap_limit: '6g'   # RAM + swap combined
# Or in Kubernetes (container spec excerpt)
resources:
  limits:
    memory: "4Gi"
  requests:
    memory: "2Gi"
Background vacuum operations can consume substantial memory and trigger OOM errors:
Check autovacuum configuration:
-- Show autovacuum settings
SHOW autovacuum;
SHOW autovacuum_work_mem;
SHOW autovacuum_max_workers;
-- autovacuum_work_mem defaults to maintenance_work_mem (usually high)
-- With multiple workers, this multiplies
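To see how much vacuum activity is running at this moment (each worker can hold up to autovacuum_work_mem), a quick check against pg_stat_activity:
-- List currently running autovacuum workers and what they are processing
SELECT pid, backend_start, query
FROM pg_stat_activity
WHERE backend_type = 'autovacuum worker';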
Reduce autovacuum memory usage:
# Edit postgresql.conf
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find or add:
autovacuum_work_mem = '256MB' # Instead of maintenance_work_mem
autovacuum_max_workers = 2 # Reduce parallel workers
maintenance_work_mem = '512MB' # Maintenance operations (VACUUM, CREATE INDEX)
# Restart
sudo systemctl restart postgresql
Schedule vacuum during low-traffic periods:
-- Manually trigger vacuum outside peak hours
VACUUM ANALYZE;
-- Or use pg_cron extension
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Vacuum at 3 AM
SELECT cron.schedule('vacuum-jobs', '0 3 * * *', 'VACUUM ANALYZE;');
If out of memory errors persist after optimization, the server lacks sufficient resources:
Evaluate hardware upgrade options:
# Check current hardware
free -h
nproc # CPU cores
uname -m # Architecture
# For cloud servers, consider upgrading instance type
# AWS: t3.medium → t3.large → t3.xlarge
# Azure: Standard_B2s → Standard_B4ms → Standard_D4s_v3
# GCP: e2-medium → e2-standard-2 → e2-standard-4
Recommended minimum specs by workload:
Development/Testing:
- 2GB RAM
- 1-2 CPU cores
- Basic SSD
Small production (< 100 connections):
- 8GB RAM
- 2-4 CPU cores
- Fast SSD (IOPS: 1000+)
Medium production (100-500 connections):
- 32GB RAM
- 8 CPU cores
- Fast SSD (IOPS: 5000+)
- Dedicated network
Large production (500+ connections):
- 64GB+ RAM
- 16+ CPU cores
- NVME SSD
- Network optimization
Add more swap space (temporary workaround):
# Create 8GB swap file
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Check
free -h
# Make permanent
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab
# Note: Swap is much slower than RAM, but prevents crashes
Understanding PostgreSQL memory contexts:
When PostgreSQL encounters "FATAL: out of memory", it sometimes dumps a "TopMemoryContext" report in the logs. This shows memory allocation by component:
TopMemoryContext: 1234567 bytes
├─ CacheMemoryContext: 567890 bytes
├─ MessageContext: 123456 bytes
└─ ExecutorState: 543210 bytes (largest)
ExecutorState typically dominates during query execution. Each sort/hash/join allocates from its own context.
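On PostgreSQL 14 and newer you do not have to wait for a crash dump: the memory contexts of the current session can be inspected live through the pg_backend_memory_contexts view:
-- Largest memory contexts in the current backend
SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;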
Memory allocation multipliers:
The risk of OOM scales with multiple factors:
1. work_mem × connections: 50MB work_mem × 100 connections = potential 5GB usage
2. work_mem × operations per query: Complex query with 8 sorts = 8 × work_mem
3. Concurrent activity: Vacuum while handling queries = maintenance_work_mem + multiple work_mem allocations
Linux OOM killer vs PostgreSQL error:
If your system completely runs out of memory:
- PostgreSQL detects the failed allocation and reports an "out of memory" error itself (the server stays up; at worst the failing query or connection is lost)
- Linux OOM killer forcefully terminates processes to free RAM (may kill PostgreSQL entirely)
To avoid OOM killer:
# Check the postmaster's OOM score adjustment
cat /proc/$(pgrep -o -u postgres postgres)/oom_score_adj
# Values range from -1000 (never killed) to 1000 (killed first); negative = protected
# Protect the postmaster from the OOM killer
echo -1000 | sudo tee /proc/$(pgrep -o -u postgres postgres)/oom_score_adj
Difference between error types:
1. FATAL: out of memory - Query operation failed, connection terminates
2. ERROR: out of memory - Smaller operation failed, query cancels but connection persists
3. ERROR: out of shared memory - PostgreSQL's shared memory structures (most often the lock table) are exhausted; usually addressed by parameters such as max_locks_per_transaction rather than work_mem
Performance trade-offs:
- Reducing work_mem prevents crashes but spills sorts to disk (slower queries)
- Increasing shared_buffers helps cache but limits available work_mem
- Using more connections requires lower per-connection memory (pooling is better)
Monitoring memory in production:
-- Check cache hit ratio per database (a falling ratio hints at memory pressure)
SELECT
datname,
blks_hit as cache_hits,
blks_read as cache_misses,
round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) as hit_ratio
FROM pg_stat_database
WHERE datname IS NOT NULL;
-- Memory-hungry queries (requires pg_stat_statements; column names for PostgreSQL 13+)
SELECT
query,
calls,
mean_exec_time,
max_exec_time,
temp_blks_written, -- blocks spilled to temp files (work_mem overflow)
rows
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
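The last query depends on the pg_stat_statements extension; if it is not installed yet, enabling it is a two-step change (note that ALTER SYSTEM below overwrites any existing shared_preload_libraries list):
-- Preloading the library requires a server restart
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- After the restart, create the extension in the database you monitor
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;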
How to fix "vacuum failsafe triggered" in PostgreSQL
PANIC: could not write to file
How to fix PANIC: could not write to file in PostgreSQL
insufficient columns in unique constraint for partition key
How to fix "insufficient columns in unique constraint for partition key" in PostgreSQL
ERROR 42501: must be owner of table
How to fix "must be owner of table" in PostgreSQL
trigger cannot change partition destination
How to fix "Trigger cannot change partition destination" in PostgreSQL