Your AWS RDS PostgreSQL instance has exhausted its allocated storage capacity. Resolve this by increasing storage, enabling autoscaling, or removing unnecessary data. Immediate action is required to prevent database unavailability.
This error occurs when an Amazon RDS for PostgreSQL instance reaches its maximum allocated storage capacity. When the database cannot allocate space for new data, writes fail with errors like "could not extend file: No space left on device". Unlike self-hosted PostgreSQL where you control filesystem sizing, RDS enforces hard storage limits. Once you hit this limit, the database enters a degraded state where normal operations become impossible. AWS prevents further writes to protect database integrity, but this leaves you in a critical situation requiring immediate intervention.
Check your RDS instance details to confirm the storage issue:
1. Open AWS Console > RDS > Databases
2. Click your PostgreSQL instance
3. Scroll to "Storage" section - verify "Storage allocated" vs "Storage used"
4. Check "Monitoring" tab for "Free Storage Space" metric (should show 0 or near 0)
5. Verify status is "storage-full"
This confirms the issue is storage capacity, not application-level disk checks.
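If the instance still accepts read-only connections, you can corroborate the picture from inside PostgreSQL. A minimal sketch (these sizes cover data files only, not WAL, logs, or temporary files):
-- On-disk size of each database on the instance
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;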
The fastest recovery path is enabling autoscaling to automatically increase storage:
1. Go to RDS console > Databases > Your instance
2. Click "Modify"
3. Scroll to "Storage" section
4. Enable "Storage autoscaling"
5. Set "Maximum storage limit" (e.g., 100 GB if currently at 20 GB)
6. Click "Apply immediately"
7. The database remains available during this operation
RDS will automatically increase storage when free space drops below the autoscaling threshold. This prevents future storage-full errors without manual intervention.
Replication slots prevent WAL cleanup, often consuming significant space:
-- Connect as master user and list replication slots
SELECT slot_name, slot_type, restart_lsn, active
FROM pg_replication_slots
ORDER BY active DESC;
-- Check if standby is still active
SELECT client_addr, state FROM pg_stat_replication;
-- Drop inactive slots (only if standby is permanently decommissioned)
SELECT pg_drop_replication_slot('slot_name');

Dropping unused slots immediately releases retained WAL files, potentially freeing 10+ GB depending on your workload.
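Before dropping anything, you can estimate how much WAL each slot is pinning by diffing the current WAL position against the slot's restart_lsn. A sketch for PostgreSQL 10 and later (older versions use pg_current_xlog_location and pg_xlog_location_diff instead):
-- Estimate WAL retained by each replication slot
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;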
Once space is freed, vacuum tables to recover space from deleted rows:
-- List the largest tables (total size, including indexes and TOAST)
SELECT schemaname, tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 10;
-- Run standard VACUUM (doesn't block reads or writes)
VACUUM ANALYZE;
-- For critical tables with significant bloat, use VACUUM FULL (requires exclusive lock)
-- Only run during maintenance windows
VACUUM FULL table_name;

Standard VACUUM makes dead-row space reusable but doesn't return it to the operating system. VACUUM FULL returns space to the OS, but it takes an ACCESS EXCLUSIVE lock and writes a complete new copy of the table, so it needs free disk roughly equal to the table's size; run it only after you have freed some headroom.
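Before resorting to VACUUM FULL, check which tables actually carry dead rows; the statistics collector tracks this per table. A quick sketch (the LIMIT is arbitrary):
-- Tables with the most dead (vacuumable) tuples
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;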
PostgreSQL and RDS maintain logs and temporary files that can accumulate; since RDS gives you no filesystem access, you trim them through retention settings rather than deleting files directly:
-- Check log retention period (RDS parameter)
SELECT name, setting FROM pg_settings
WHERE name LIKE '%log%' ORDER BY name;
-- View current parameter group settings
-- RDS console > Parameter Groups > Your group > rds.log_retention_period
-- Reduce log retention if set too high (e.g., from 10080 minutes to 1440)
-- Modify the parameter group, or create a new one with lower retention

In the RDS console:
1. Go to Parameter Groups
2. Create new parameter group (if using default)
3. Set rds.log_retention_period = 1440 (1 day instead of 3 days)
4. Apply parameter group to instance
This prevents future excessive log accumulation.
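Temporary files are the other half of this section; PostgreSQL tracks cumulative temp-file usage per database (counters since the last statistics reset, not current disk usage). A sketch:
-- Cumulative temporary-file usage per database
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;
If temp_bytes is large, look for queries spilling to disk (big sorts or hashes relative to work_mem).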
Once autoscaling is enabled, configure appropriate thresholds:
1. RDS Console > Databases > Your instance
2. Click "Modify"
3. Under Storage autoscaling:
- "Autoscaling enabled": Yes
- "Maximum storage limit": Set based on budget (10-20% above current use as buffer)
- Default autoscaling triggers at ~10% free space, increases by ~5% or 40GB
4. Under Enhanced Monitoring:
- FreeStorageSpace is published to CloudWatch by default; "Enhanced monitoring" adds finer-grained OS-level metrics
- Set CloudWatch alarms for critical thresholds:
- Alert at 15% free space (warning)
- Alert at 10% free space (critical)
- Alert at 5% free space (emergency)
5. Click "Apply immediately"
Analyze why storage filled to prevent recurrence:
-- Identify largest tables
SELECT schemaname, tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- Check for uncontrolled table growth
SELECT relname,
n_tup_ins as inserts,
n_tup_upd as updates,
n_tup_del as deletes
FROM pg_stat_user_tables
ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC;
-- Check for per-table autovacuum overrides (pg_tables has no autovacuum column;
-- overrides are stored in pg_class.reloptions)
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL;

If a single table is consuming most of the space:
- Implement retention policies (delete old rows automatically; see the sketch after this list)
- Enable table partitioning for cleaner data lifecycle
- Increase autovacuum frequency for that table:
-- Vacuum when 5% of rows are dead instead of the 20% default
-- (autovacuum_naptime is a global setting and cannot be set per table)
ALTER TABLE large_table SET (autovacuum_vacuum_scale_factor = 0.05);
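A minimal retention-policy sketch for the first bullet, assuming a hypothetical large_table with a created_at timestamp column; schedule it with the pg_cron extension (supported on RDS) or an external job runner:
-- Delete rows older than 90 days in small batches to keep transactions short
DELETE FROM large_table
WHERE ctid IN (
    SELECT ctid
    FROM large_table
    WHERE created_at < now() - interval '90 days'
    LIMIT 10000
);
Note that deleted rows become reusable space only after VACUUM processes them; they are not returned to the filesystem.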
RDS-specific behaviors:
- Storage increases take a few minutes; the instance remains online
- You can only increase RDS storage, never decrease it
- Maximum RDS for PostgreSQL storage is 64 TiB
- Autoscaling triggers when free space stays below 10% of allocated storage for at least five minutes, and cannot fire again within six hours of the previous storage modification
WAL retention and replication: RDS manages WAL files automatically, but inactive replication slots prevent cleanup. If you have a failing read replica that will not reconnect, drop the slot immediately. Do NOT delete WAL files directly.
Performance during VACUUM FULL: VACUUM FULL locks tables exclusively, so run it only during maintenance windows. For a lock-friendly alternative, the pg_repack extension (supported on RDS for PostgreSQL) rebuilds bloated tables while holding exclusive locks only briefly.
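A hedged sketch of the pg_repack route; the table name is an example, and the client invocation (shown as a comment) runs from any host with network access to the instance:
-- One-time setup: enable the extension in the target database
CREATE EXTENSION IF NOT EXISTS pg_repack;
-- The rebuild itself is driven by the pg_repack client utility, for example:
--   pg_repack --host=<endpoint> --username=<master_user> --dbname=mydb --table=large_table
-- Note: pg_repack needs roughly twice the target table's size in free space while it rebuilds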
Storage autoscaling limits: While autoscaling prevents future storage-full errors, it grows storage in increments (the greater of 10 GiB, 10% of currently allocated storage, or predicted growth) and is subject to the six-hour cooldown. For rapidly growing databases, raise "Maximum storage limit" and consider manually pre-allocating more headroom.
Multi-AZ deployments: The standby mirrors the primary's storage, so a storage-full primary cannot be fixed by failing over. Resolve the storage issue on the primary first, then verify that replication has caught up.
Cost considerations: RDS charges per GB/month. Aggressive autoscaling can increase costs. Balance between safety (high max limit) and budget by reviewing growth trends and setting realistic maximums.