The SQLITE_NOLFS error occurs when SQLite tries to create a database file larger than 2GB on a system without Large File Support (LFS). This typically happens on older filesystems, 32-bit systems, or mismatched OS/filesystem configurations that cannot handle files exceeding the 2GB limit.
The "SQLITE_NOLFS: Uses OS features not supported on host" error (error code 22) indicates that SQLite attempted to use operating system features that are not available or supported on the current system. Most commonly, this error appears when dealing with large file support (LFS) limitations. **What is Large File Support (LFS)?** Large File Support is an OS-level feature that allows the filesystem to handle files larger than 2GB (2^31 bytes). Without LFS: - 32-bit systems can only create files up to 2GB - FAT32 filesystems have a 4GB maximum file size - Some older Unix systems don't have LFS enabled by default When SQLite tries to grow beyond this limit and the system doesn't support it, you get the SQLITE_NOLFS error. **When This Error Occurs:** 1. **Database reaches 2GB on 32-bit system**: Your database file naturally grows and hits the 2GB boundary 2. **WAL mode with large transactions**: Write-Ahead Logging on systems without LFS 3. **Creating large backups or exports**: Writing to filesystem without LFS support 4. **Docker containers with limited storage**: Constrained storage backend without proper LFS setup 5. **Old filesystem mount without LFS flag**: Legacy systems where LFS wasn't enabled at mount time 6. **Cross-compilation mismatch**: Compiled for 32-bit but running on different architecture **Different from Other Errors:** - **SQLITE_TOOBIG**: Occurs when inserting a single blob/value larger than max allowed - **SQLITE_FULL**: Occurs when disk space is actually exhausted - **SQLITE_IOERR**: Occurs when I/O operations fail - **SQLITE_NOLFS**: Occurs specifically when file size would exceed OS limits
First, verify whether your system and SQLite are 32-bit or 64-bit, and whether LFS is available.
Check system architecture:
# Check if system is 32-bit or 64-bit
uname -m
# Output examples:
# - x86_64: 64-bit
# - i686 or i386: 32-bit
# - aarch64: 64-bit ARM
# - armv7l: 32-bit ARM
# Check if OS is 32-bit or 64-bit
file /bin/bash
# Look for "ELF 64-bit" or "ELF 32-bit"
# List the dynamic loaders present (if only a 32-bit loader appears, the userland is likely 32-bit only)
ls -la /lib*/ld-* | head -3
Check SQLite version and capabilities:
# Check SQLite version
sqlite3 --version
# Check which Node.js SQLite bindings (and their versions) are installed
npm list sqlite3 better-sqlite3
# Or check in Python:
python3 -c "import sqlite3; print(sqlite3.sqlite_version); print(sqlite3.version)"
# Compile options (if available)
sqlite3 << EOF
PRAGMA compile_options;
EOF
# Look for entries like: THREADSAFE, ENABLE_FTS5, etc.
Check filesystem and mount options:
# Check filesystem type and mount options
df -T /path/to/database
# Look for "ext4", "btrfs", etc. and mount options
# Get detailed mount information
mount | grep "mount-point"
# Check for "ro" (read-only) or "noatime" and similar
# Check if filesystem supports large files
stat /path/to/database.db
# The reported size shows how large the database file currently is
# For Docker: check storage driver
docker info | grep "Storage Driver"
# Look for "overlay2", "btrfs", "devicemapper", etc.If you're on a 32-bit system, the most reliable fix is to upgrade to a 64-bit environment.
Upgrade operating system:
# If you're running 32-bit Linux on 64-bit capable hardware
# Back up your data first
sudo cp -r /path/to/database.db /path/to/database.db.backup
# Upgrade to 64-bit version of your OS
# For Ubuntu:
# Download 64-bit ISO, create bootable USB, and perform fresh install
# (Note: most distributions have no supported in-place 32-bit to 64-bit upgrade - plan for a clean reinstall)
# For other distributions, follow official 64-bit upgrade path
# Generally: download 64-bit version, backup data, reinstall, restore data
Switch to 64-bit SQLite build:
If your system is 64-bit but you're using 32-bit SQLite (rare but possible):
# For Node.js - reinstall sqlite3/better-sqlite3 for 64-bit
npm rebuild sqlite3 --build-from-source
# Or reinstall better-sqlite3 so its native addon is rebuilt for the current architecture:
npm uninstall better-sqlite3
npm install better-sqlite3@latest
# For Python
# Python's sqlite3 module is part of the standard library - there is nothing to reinstall with pip
# Ensure you're using 64-bit Python
python3 --version
file $(which python3)
# Should show "ELF 64-bit"
# For Ruby
gem uninstall sqlite3
gem install sqlite3Verify 64-bit after upgrade:
# Confirm system is now 64-bit
uname -m # Should show x86_64, aarch64, etc.
file /bin/bash # Should show "ELF 64-bit"
# Verify the sqlite3 CLI binary is 64-bit
file "$(which sqlite3)"
# Should show "ELF 64-bit"
# Test with Python
python3 << EOF
import sys
print(f"Python: {sys.maxsize > 2**32 and '64-bit' or '32-bit'}")
import sqlite3
conn = sqlite3.connect(':memory:')
print(f"SQLite version: {sqlite3.sqlite_version}")
EOF
If your filesystem supports LFS but it's not enabled at the mount point, remount with LFS enabled.
Check if filesystem can support LFS:
# Check the filesystem type backing the database's location
df -T /path/to/database    # or: lsblk -f to list all devices and their filesystems
# Look for ext4, btrfs, xfs, etc. (all modern filesystems support LFS)
# FAT32 does NOT support large files - avoid using it for SQLite
# NTFS supports large files on Windows/WSL
# For Docker volumes: check underlying filesystem
docker volume inspect volume-name | grep Mountpoint
ls -la /var/lib/docker/volumes/*/ # Check host path
Remount filesystem with LFS enabled:
# For Linux filesystems (ext4, btrfs, xfs, etc.)
# Get current mount point
mount | grep /path/to/database
# Remount with default options (large files are supported by default on modern Linux filesystems)
sudo mount -o remount,defaults /mount-point
# Or explicitly remount read-write if the mount was restricted
sudo mount -o remount,rw,exec /mount-point
# Verify remount succeeded
mount | grep /mount-point
# Test if file size restrictions are lifted
sudo dd if=/dev/zero of=/mount-point/test-2gb bs=1M count=2048
ls -lah /mount-point/test-2gb
rm /mount-point/test-2gb # Clean up test file
For Docker volumes:
# If using a Docker named volume with size constraints
# (note: a tmpfs volume is RAM-backed and not persistent - only suitable for scratch databases)
docker volume create --opt type=tmpfs --opt device=tmpfs --opt o=size=10G large_db
# Check Docker storage driver supports large files
docker info | grep "Storage Driver"
# If using overlay2 or btrfs, LFS is supported by default
# If using older devicemapper, upgrade Docker or change driver
# For bind mounts: ensure host filesystem has LFS
mkdir -p /host/data/path
docker run -v /host/data/path:/container/mount/point myapp
Update /etc/fstab for persistent mounting (Linux):
# Edit fstab to add mount at startup
sudo nano /etc/fstab
# Add or modify line:
# /dev/sdXY /mount-point ext4 defaults,rw,exec 0 0
# Reload fstab
sudo mount -a
# Verify
mount | grep /mount-point
Test that your environment can handle large files and that SQLite can properly use them.
Create test database and grow it:
# Create a test database and intentionally grow it large
sqlite3 /tmp/test-lfs.db << EOF
-- Create test table
CREATE TABLE large_data (
id INTEGER PRIMARY KEY,
content BLOB
);
-- Insert data to grow database
-- This creates approximately 500MB of data
INSERT INTO large_data (content)
WITH RECURSIVE cnt(x) AS (
  SELECT 1 UNION ALL SELECT x+1 FROM cnt WHERE x < 500
)
SELECT randomblob(1000000) FROM cnt;
-- Check database size
PRAGMA page_count;
PRAGMA page_size;
EOF
# Check actual file size
ls -lah /tmp/test-lfs.db
# Try growing it further towards 2GB (if disk space allows)
# This shows if the system can handle large files
Test with Python:
import sqlite3
import os
db_path = '/tmp/test-lfs.db'
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
# Create table
cursor.execute('''CREATE TABLE IF NOT EXISTS test_data (
id INTEGER PRIMARY KEY,
data BLOB
)''')
# Insert large binary data
print("Inserting test data...")
chunk_size = 100_000_000 # 100MB chunks
for i in range(5):  # 500MB total
    data = os.urandom(chunk_size)
    cursor.execute('INSERT INTO test_data (data) VALUES (?)', (data,))
    print(f"Inserted chunk {i+1}/5")
conn.commit()
# Check database file size
file_size = os.path.getsize(db_path)
print(f"Database size: {file_size / 1024 / 1024:.1f} MB")
# Verify database integrity
cursor.execute('PRAGMA integrity_check')
result = cursor.fetchone()
print(f"Integrity check: {result[0]}")
conn.close()
For Node.js with better-sqlite3:
const Database = require('better-sqlite3');
const crypto = require('crypto');
const fs = require('fs');
const db = new Database('/tmp/test-lfs.db');
// Create table
db.exec(`CREATE TABLE IF NOT EXISTS test_data (
id INTEGER PRIMARY KEY,
data BLOB
)`);
// Insert large chunks
console.log('Inserting test data...');
const chunkSize = 100_000_000; // 100MB
const insert = db.prepare('INSERT INTO test_data (data) VALUES (?)');
for (let i = 0; i < 5; i++) {
const data = crypto.randomBytes(chunkSize);
insert.run(data);
console.log(`Inserted chunk ${i+1}/5`);
}
// Check size
const stats = fs.statSync('/tmp/test-lfs.db');
console.log(`Database size: ${(stats.size / 1024 / 1024).toFixed(1)} MB`);
// Integrity check
const result = db.prepare('PRAGMA integrity_check').get();
console.log(`Integrity check: ${result['integrity_check']}`);
db.close();
If the test fails with SQLITE_NOLFS (a quick filesystem probe follows this list):
- System or SQLite is 32-bit - upgrade to 64-bit (Step 2)
- Filesystem doesn't support LFS - choose different storage location
- Docker disk space exhausted - allocate more space to Docker
- File size limit enforced - check mount options and remount with LFS enabled
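One quick way to separate a filesystem limitation from a SQLite or binding limitation is to create a sparse file just past the 2GB boundary; a sparse file allocates no real disk space, so low disk space isn't a factor. A minimal sketch, assuming GNU coreutils and a placeholder mount point:
# Probe the target filesystem with a 3GB sparse file
truncate -s 3G /mount-point/lfs-probe && echo "Filesystem accepts >2GB files" || echo "Filesystem rejected a >2GB file"
ls -lh /mount-point/lfs-probe
rm -f /mount-point/lfs-probe   # Clean up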
Docker containers may encounter SQLITE_NOLFS if storage is constrained or using incompatible storage driver.
Check and adjust Docker storage allocation:
# Check current Docker storage usage
docker system df
# View detailed storage info
docker info | grep -i storage
# Get available disk space
df -h /var/lib/docker
# If using Docker Desktop (Mac/Windows), increase disk allocation
# For Docker Desktop: Settings → Resources → Disk Image Size
# Increase to at least 10-20GB for databases
Verify Docker storage driver supports large files:
# Check current storage driver
docker info | grep "Storage Driver"
# Recommended storage drivers and backing filesystems (all support LFS):
# - overlay2: Modern default, supports large files
# - btrfs: Supports large files
# - xfs: Supports large files
# - ext4: Supports large files
# If using old devicemapper or vfs, upgrade/change driver
# Edit /etc/docker/daemon.json
sudo nano /etc/docker/daemon.json
# Add or modify:
# {
# "storage-driver": "overlay2"
# }
# Restart Docker
sudo systemctl restart docker
# Verify change
docker info | grep "Storage Driver"Use named volume instead of bind mount:
# Create named volume with larger size if supported
docker volume create large-db-storage
# Use in docker-compose.yml
services:
  app:
    volumes:
      - large-db-storage:/app/data
volumes:
  large-db-storage:
    driver: local
# Alternatively, use bind mount with explicit path on 64-bit filesystem
docker run -v /data/large-storage:/app/data myappDocker Compose configuration for large databases:
version: '3.8'
services:
  app:
    image: myapp:latest
    volumes:
      # Use named volume or explicit host path with LFS support
      - database-storage:/app/data
    environment:
      DATABASE_PATH: /app/data/database.db
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G # Ensure sufficient memory
volumes:
  database-storage:
    driver: local
    driver_opts:
      # Ensure large file support if using NFS or special driver
      type: nfs
      o: addr=10.0.0.1,vers=4,soft,timeo=180,bg,tcp
Kubernetes PersistentVolume with LFS support:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: database-pv
spec:
  capacity:
    storage: 50Gi # Large enough for growing database
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  hostPath:
    path: /data/sqlite-db # Ensure this is on 64-bit filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 50Gi
If current location has filesystem limitations, migrate the database to a path that fully supports LFS.
Identify suitable location:
# Check all mounted filesystems for LFS support
df -T
# ext4, btrfs, xfs: Full LFS support
# FAT32, exFAT: No LFS support - avoid these
# Find location with most space and LFS support
du -sh /*
df -h /home /var /opt
# Pick mount point with:
# - ext4, btrfs, or xfs filesystem
# - Plenty of free space (1.5x database size recommended)
# - Write access for your user
Backup current database:
# Create backup at current location
cp /old/location/database.db /old/location/database.db.backup
# Or use SQLite dump for text backup
sqlite3 /old/location/database.db ".dump" > /old/location/database.sqlMove database to new location:
# For simple relocation with WAL files
# Stop the application first so no open connections are writing to the database
cp /old/location/database.db /new/location/database.db
cp /old/location/database.db-wal /new/location/database.db-wal 2>/dev/null
cp /old/location/database.db-shm /new/location/database.db-shm 2>/dev/null
# Set proper permissions
chmod 644 /new/location/database.db*
# Update application config to point to new location
# Edit config file, environment variables, or code
# Test with new location
sqlite3 /new/location/database.db "PRAGMA integrity_check;"
# Once verified, remove old copy
rm /old/location/database.db
Update application to use new path:
# Environment variable
export DATABASE_PATH=/new/location/database.db
# Or in config file
# database:
# path: /new/location/database.db
# Or in code
const dbPath = '/new/location/database.db';
const db = new Database(dbPath);
For Docker applications:
# docker-compose.yml
services:
  app:
    volumes:
      # Mount LFS-capable directory from host
      - /data/sqlite-databases:/app/data
    environment:
      DATABASE_PATH: /app/data/database.db
Restore from backup if needed:
# If migration caused issues, restore from backup
rm /new/location/database.db*
cp /old/location/database.db.backup /new/location/database.db
# Or restore from SQL dump
sqlite3 /new/location/database.db < /old/location/database.sql
Understanding Large File Support (LFS) on Different Systems:
LFS is a POSIX standard allowing files larger than 2^31-1 bytes (2GB). Without LFS:
- File I/O syscalls (open, lseek, read, write) use 32-bit file offsets (max 2GB)
- off_t is 32 bits instead of 64 bits
- File size growth fails when exceeding 2^31-1
Modern systems support LFS by default, but some edge cases still occur:
32-bit vs 64-bit considerations:
- 32-bit userland: Cannot use files > 2GB even on 64-bit kernel
- 64-bit system with 32-bit SQLite: whether large files work depends on how the 32-bit build was compiled (you can check each component's architecture as shown below)
- Cross-compiled binaries: May have LFS disabled
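To tell which of these cases applies, inspect the architecture of the SQLite components you actually run. A sketch with assumed install locations (the better-sqlite3 addon path in particular may vary by version and platform):
# sqlite3 CLI binary
file "$(which sqlite3)"
# Python's compiled sqlite3 extension module
python3 -c "import _sqlite3; print(_sqlite3.__file__)" | xargs file
# better-sqlite3 native addon (typical build output path - adjust if different)
file node_modules/better-sqlite3/build/Release/better_sqlite3.node
# "ELF 32-bit" in any of these means that component is limited to 2GB files unless built with LFS flags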
Filesystem-specific LFS:
- ext4: Full LFS support (default on modern Linux)
- btrfs: Full LFS support
- xfs: Full LFS support (128PB max file size)
- NTFS (Windows/WSL): Full LFS support (16EB max file size)
- FAT32: Maximum 4GB file size - NOT suitable for SQLite
- exFAT: Maximum 16EB file size
- APFS (macOS): Full LFS support
- HFS+ (older macOS): Full LFS support
Virtual environment constraints:
Docker can artificially limit file sizes depending on storage driver:
- overlay2: No artificial limit (limited by filesystem)
- btrfs: Limited by btrfs subvolume quota
- xfs: No artificial limit on xfs
- devicemapper: May have 10-100GB limitations
Compile-time options affecting LFS:
When building SQLite from source, these settings control large file support:
- Linux/Unix: Automatically detected via configure script
- _FILE_OFFSET_BITS=64: Enables LFS on 32-bit systems (see the build sketch below)
- SQLITE_ENABLE_LOCKING_STYLE: Enables alternative file-locking styles (mainly relevant on macOS and network filesystems)
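As a concrete illustration, the platform can report which compiler flags a 32-bit build needs, and the SQLite amalgamation can be compiled with 64-bit file offsets forced on. A sketch assuming the amalgamation sources (sqlite3.c and shell.c) are in the current directory:
# Ask the C library which flags enable LFS here (prints nothing on most 64-bit systems)
getconf LFS_CFLAGS
# Build the sqlite3 shell from the amalgamation with LFS forced on
gcc -O2 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE shell.c sqlite3.c -lpthread -ldl -lm -o sqlite3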
Network filesystems and LFS:
NFS versions and configuration affect LFS:
- NFSv3: LFS support depends on mount options
- NFSv4: Full LFS support built-in
- SMB/CIFS: LFS support varies by server
For network storage:
# Check NFS version
mount | grep nfs
# Look for "vers=4" for full LFS
# Check SMB capabilities
smbstatus
Performance implications of large databases:
When approaching 2GB limits (even on 64-bit systems):
- Page cache efficiency decreases
- VACUUM operations become slower
- WAL checkpoint times increase
- Consider partitioning or archiving old data (a quick size check follows below)
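A quick way to check both the current size and how much of the file is free, reclaimable space; this sketch assumes SQLite 3.16+ (for the pragma table-valued functions) and a placeholder path:
# Logical database size in MB, plus the count of free (reclaimable) pages
sqlite3 /path/to/database.db <<EOF
SELECT page_count * page_size / 1024.0 / 1024.0 AS size_mb
  FROM pragma_page_count(), pragma_page_size();
PRAGMA freelist_count;
EOF
# A large freelist relative to page_count means VACUUM can shrink the file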
Alternative solutions to SQLITE_NOLFS:
1. Database sharding: Split large database into multiple files by range
2. Time-based archiving: Archive old records to a separate database (a sketch follows after this list)
3. SQLite over a network layer: tools such as rqlite or dqlite, or streaming replication with Litestream
4. Upgrade to PostgreSQL/MySQL: For very large datasets requiring >10GB
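As a sketch of option 2, older rows can be moved into a separate archive database with ATTACH. The table name (events), timestamp column (created_at), and cutoff date here are hypothetical placeholders:
sqlite3 /path/to/database.db <<EOF
ATTACH DATABASE '/path/to/archive-2023.db' AS archive;
-- Copy the schema only (CREATE TABLE ... AS does not carry over constraints or indexes)
CREATE TABLE IF NOT EXISTS archive.events AS SELECT * FROM events WHERE 0;
INSERT INTO archive.events SELECT * FROM events WHERE created_at < '2024-01-01';
DELETE FROM events WHERE created_at < '2024-01-01';
DETACH DATABASE archive;
VACUUM;
EOF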
Detecting SQLITE_NOLFS programmatically:
import sqlite3
try:
    conn = sqlite3.connect('/path/to/database.db')
    cursor = conn.cursor()
    # Any write that grows the file past the OS limit can raise the error;
    # a simple statement stands in for that here
    cursor.execute('CREATE TABLE large (id INT, data BLOB)')
except sqlite3.OperationalError as e:
    if 'NOLFS' in str(e):
        print("Large file support not available")
        print("Upgrade to 64-bit or enable LFS on filesystem")
const Database = require('better-sqlite3');
try {
const db = new Database('/path/to/database.db');
db.exec('CREATE TABLE large (id INT, data BLOB)');
} catch (error) {
if (error.message.includes('NOLFS')) {
console.error('Large file support not available');
console.error('Upgrade to 64-bit or enable LFS on filesystem');
}
throw error;
}