This error occurs when you attempt to connect to PostgreSQL while it is still initializing or recovering from an unexpected shutdown. Typically, waiting for the startup process to complete or restarting the database service resolves the issue.
The "FATAL: the database system is starting up" error is not actually an error condition in the traditional sense, but rather an informational message indicating that the PostgreSQL server is in its initialization phase. After an unclean shutdown or system restart, PostgreSQL must recover by replaying transaction log (WAL) records from the latest checkpoint. During this recovery process, the server accepts connections but rejects queries with this message. The database is inaccessible for regular operations until the recovery process completes, which can take anywhere from seconds to hours depending on the volume of transactions that need to be replayed.
First, verify that the PostgreSQL server is actually running and observe the startup progress:
# Check if PostgreSQL service is running
sudo systemctl status postgresql
# Or check if the process is active
ps aux | grep postgres
# Use pg_isready to check connection status
pg_isready -h localhost
If pg_isready reports "accepting connections", the server is ready. If it reports "rejecting connections", the startup is still in progress.
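Because pg_isready also reports the result in its exit status (0 = accepting connections, 1 = rejecting connections, 2 = no response), you can wait for startup to finish with a small polling loop. This is a minimal sketch; the 60-attempt limit and 5-second interval are arbitrary values to adjust for your environment:
# Poll until the server accepts connections or we give up
for i in $(seq 1 60); do
    if pg_isready -h localhost -q; then
        echo "PostgreSQL is accepting connections"
        break
    fi
    echo "Attempt $i: server not ready yet, waiting 5 seconds..."
    sleep 5
done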
In most cases, this error resolves itself. The database needs time to complete crash recovery by replaying transaction logs. Wait at least 2-5 minutes before taking further action:
# Monitor PostgreSQL logs to see recovery progress
tail -f /var/log/postgresql/postgresql-15-main.log
# Look for messages like "redo starts at" and "redo done at"
For larger databases with many transactions, recovery can take much longer. Monitor the logs to confirm progress rather than assuming the database is stuck.
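A healthy crash recovery produces a log sequence like the one below; the timestamps and WAL positions are illustrative, and exact wording varies slightly between PostgreSQL versions:
LOG:  database system was interrupted; last known up at 2024-01-15 10:00:00 UTC
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 0/16B3B60
LOG:  redo done at 0/1F4C5A8
LOG:  database system is ready to accept connections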
If the server appears to be stuck, check the data directory for stale lock files or recovery indicators:
# Check for postmaster PID file and lock
ls -la /var/lib/postgresql/15/main/postmaster.pid
# Check for recovery/standby indicators (PostgreSQL 12+ uses signal files;
# recovery.conf applies only to version 11 and earlier)
ls -la /var/lib/postgresql/15/main/recovery.signal
ls -la /var/lib/postgresql/15/main/standby.signal
Note: Replace "15" with your actual PostgreSQL version and "/var/lib/postgresql" with your actual data directory location.
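If PostgreSQL is not running but postmaster.pid still exists, the lock file may be stale. The first line of postmaster.pid holds the postmaster's PID, so a minimal sketch (using the same Debian/Ubuntu default path as above) can check whether that process is actually alive:
PIDFILE=/var/lib/postgresql/15/main/postmaster.pid
if [ -f "$PIDFILE" ]; then
    PID=$(head -1 "$PIDFILE")
    if ps -p "$PID" > /dev/null 2>&1; then
        echo "postmaster (PID $PID) is running; leave the lock file alone"
    else
        echo "PID $PID is not running; postmaster.pid may be stale"
    fi
fi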
If the database has been starting up for an unreasonable amount of time, restart the service:
# Gracefully restart PostgreSQL
sudo systemctl restart postgresql
# Wait and check the status
sleep 10
sudo systemctl status postgresql
# Test the connection
psql -U postgres -d postgres -c "SELECT version();"
If a graceful restart doesn't work, try a more forceful approach:
sudo systemctl stop postgresql
sleep 5
sudo systemctl start postgresql
If this is a standby (replica) server, ensure hot_standby mode is enabled to allow read-only connections during recovery:
# Check the postgresql.conf file
sudo grep hot_standby /etc/postgresql/15/main/postgresql.conf
# It should show: hot_standby = on
If hot_standby is off, enable it:
# Edit the configuration
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find and uncomment/change to:
hot_standby = on
# hot_standby can only be changed at server start, so a restart is required
sudo systemctl restart postgresql
If the startup process is unusually slow, check for resource constraints:
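Once the standby accepts connections, you can confirm it is serving read-only queries while still replaying WAL:
psql -U postgres -c "SHOW hot_standby;"           # should return "on"
psql -U postgres -c "SELECT pg_is_in_recovery();" # should return "t" on a standby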
# Check available memory
free -h
# Check disk space
df -h /var/lib/postgresql
# Check CPU usage and load
# Check CPU usage and load (batch mode so output can be piped)
top -bn1 | head -15
If memory is severely limited (especially on VMs), you may need to:
- Allocate more memory to the VM
- Reduce other services to free up memory
- Consider recovering from backups on a server with more resources
Ensure there is adequate disk space for the recovery process to complete.
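The volume of WAL on disk is a rough proxy for how much work recovery may have to do. On the same Debian/Ubuntu layout used above, you can check it with:
# Size of the WAL directory (named pg_xlog before PostgreSQL 10)
sudo du -sh /var/lib/postgresql/15/main/pg_wal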
For production environments, consider monitoring startup and recovery times from the server logs, or on standbys via functions such as pg_last_wal_replay_lsn(). If recovery consistently takes too long, investigate WAL volume and checkpoint configuration: more frequent checkpoints (a lower checkpoint_timeout or max_wal_size) reduce the amount of WAL that must be replayed after a crash, at the cost of more I/O during normal operation. For Patroni-managed clusters, use "patronictl reinit" to recover a failed instance instead of a manual restart. Finally, FATAL severity here only means the individual connection attempt was terminated; the message does not indicate data corruption unless it is accompanied by other errors.
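As an illustration, the checkpoint parameters below control this trade-off in postgresql.conf; the values shown are recent PostgreSQL defaults and are starting points to tune against your workload, not recommendations:
checkpoint_timeout = 5min            # lower values bound how much WAL a crash can leave behind
max_wal_size = 1GB                   # lower values force more frequent checkpoints
checkpoint_completion_target = 0.9   # spread checkpoint I/O across the interval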