The PostgreSQL "COPY delimiter must be a single character" error occurs when you specify a multi-character or multi-byte delimiter in a COPY command. PostgreSQL accepts only a single one-byte character as the delimiter. The error typically appears when importing files with custom delimiters, and the fix is either to change the delimiter or to pre-process the data.
PostgreSQL's COPY command has a strict limitation on delimiters: they must be exactly one single-byte character. You cannot use multi-character sequences (like "||" or "::") or multi-byte Unicode characters as delimiters; escape sequences such as E'\t' are accepted only because they resolve to a single byte. When PostgreSQL encounters a COPY statement with an invalid delimiter specification, it rejects the entire command at parse time with this error. This is a validation error raised before any data transfer begins, so no rows are written. Common mistakes include custom delimiters of two or more characters, Unicode characters that encode to more than one byte, and NUL bytes.
The most common fix is to use a single character delimiter. Check what delimiter you specified and replace it with a single valid character.
-- INCORRECT: Multi-character delimiter
COPY my_table FROM '/path/to/file.txt' WITH (DELIMITER '||');
ERROR: COPY delimiter must be a single character
-- CORRECT: Single character delimiter
COPY my_table FROM '/path/to/file.txt' WITH (DELIMITER '|');
-- Common single-character delimiters:
DELIMITER ',' -- Comma (default for CSV)
DELIMITER E'\t' -- Tab (default for text format; note the E'' escape syntax)
DELIMITER '|' -- Pipe
DELIMITER ';' -- Semicolon

If your data file actually uses a multi-character delimiter, you'll need to pre-process it (see advanced notes).
PostgreSQL supports a modern parenthesized syntax for COPY that clearly separates FORMAT from DELIMITER.
-- Modern syntax (recommended)
COPY my_table FROM '/path/to/file.csv' WITH (
    FORMAT csv,
    DELIMITER ',',
    HEADER true,
    ENCODING 'utf8'
);
-- Legacy syntax (works but less flexible)
COPY my_table FROM '/path/to/file.txt' WITH DELIMITER ',';
-- Tab-delimited (special escape sequence for single character)
COPY my_table FROM '/path/to/file.tsv' WITH (
    FORMAT text,
    DELIMITER E'\t' -- Escaped tab character
);

If you need to use special characters as delimiters, use PostgreSQL's escape string syntax.
-- Tab character (most common special case)
DELIMITER E'\t'
-- Newline (rarely used as delimiter)
DELIMITER E'\n'
-- Literal backslash
DELIMITER E'\\'
-- Standard ASCII characters don't need escaping
DELIMITER ' ' -- Space
DELIMITER ',' -- Comma
DELIMITER '|' -- Pipe

Always use single quotes, with the E'' prefix for escape sequences.
Check what delimiter is actually in your file. Use command-line tools to inspect the first few lines.
# View first few lines of your file
head -5 /path/to/file.txt
# Check what characters are used (look for delimiter patterns)
head -1 /path/to/file.txt | od -c | head
# Count occurrences of potential delimiters
head -1 /path/to/file.txt | grep -o '||' | wc -l # Multi-char delimiter
head -1 /path/to/file.txt | grep -o '|' | wc -l # Single-char delimiter

If the file uses multi-character delimiters, you'll need to pre-process it before importing (see step 5).
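If you'd rather inspect the file programmatically, Python's standard-library `csv.Sniffer` can heuristically guess the delimiter from a sample. This is a minimal sketch; the file path and candidate set are illustrative, and note that `Sniffer` only detects single-character delimiters, so a "||"-delimited file would be reported as '|'.

```python
import csv

def detect_delimiter(path, candidates=",;|\t"):
    """Guess the delimiter of a text file from its first few KB.

    'candidates' restricts the guess to likely single-character
    delimiters; adjust it for your data.
    """
    with open(path, newline="") as f:
        sample = f.read(4096)
    return csv.Sniffer().sniff(sample, delimiters=candidates).delimiter

# Example: for a pipe-delimited file, this returns '|'
```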
If your source data uses multi-character delimiters, convert them before importing with COPY.
# Replace || with | using sed
sed 's/||/|/g' original_file.txt > processed_file.txt
psql -c "\copy my_table FROM 'processed_file.txt' WITH (DELIMITER '|')"
# Replace :: with : using sed
sed 's/::/:/g' original_file.txt > processed_file.txt
psql -c "\copy my_table FROM 'processed_file.txt' WITH (DELIMITER ':')"
# Chain multiple sed commands
sed -e 's/||/,/g' -e 's/::/:/g' input.txt > output.txt
# Using awk for more complex transformations (splits on the literal "||")
awk -F'\\|\\|' 'BEGIN { OFS = "," } { print $1, $2, $3 }' original_file.txt > processed_file.txt

Alternatively, use \copy with a pipe:
cat file.txt | sed 's/||/|/g' | psql -c "\copy my_table FROM STDIN WITH (DELIMITER '|')"

The \copy command (client-side) combined with pipes allows flexible pre-processing without creating intermediate files.
# Use cat with sed to transform delimiters on-the-fly
cat multichar_delim.txt | sed 's/||/,/g' | psql -c "\copy my_table FROM STDIN WITH (FORMAT csv, DELIMITER ',')"
# More complex example: convert the delimiter and strip carriage returns
cat input.txt | sed -e 's/||/,/g' -e 's/\r//g' | psql -c "\copy my_table FROM STDIN WITH (FORMAT csv, DELIMITER ',')"
# Using Python to transform data on-the-fly
python3 transform.py | psql -c "\copy my_table FROM STDIN WITH (DELIMITER ',')"

Note: \copy is a psql client command (not SQL), so use it in shell scripts or interactive psql sessions. For server-side COPY, the file must be readable by the PostgreSQL server process.
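The transform.py referenced above is not defined in this article; a minimal sketch of such a stdin-to-stdout filter might look like the following. The "||" source delimiter is an assumption for illustration; substitute your file's actual multi-character delimiter.

```python
#!/usr/bin/env python3
"""transform.py (sketch) -- filter for use with \copy ... FROM STDIN.

Reads lines from stdin, replaces the assumed multi-character
delimiter "||" with a comma, and writes the result to stdout.
"""
import sys

def transform(line):
    # Replace the multi-character delimiter with a single comma.
    return line.replace("||", ",")

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(transform(line))
```

Because it streams line by line, this handles files larger than memory; be aware that a naive replace like this will also rewrite "||" occurrences inside field values.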
PostgreSQL requires the delimiter to be a single one-byte character; NUL bytes, newlines, and carriage returns are not allowed.
-- INCORRECT: Cannot use NULL byte
COPY my_table FROM '/path/to/file.txt' WITH (DELIMITER '\x00');
ERROR: COPY delimiter must be a single character
-- CORRECT: Use a valid printable ASCII character
COPY my_table FROM '/path/to/file.txt' WITH (DELIMITER ',');
# To find NULL bytes in your file:
grep -qP '\x00' /path/to/file.txt && echo "NULL bytes found" || echo "No NULL bytes"
# To strip NULL bytes before import:
tr -d '\0' < input.txt > cleaned.txt
psql -c "\copy my_table FROM 'cleaned.txt' WITH (DELIMITER ',')"

PostgreSQL's COPY command is strict about delimiters for performance and compatibility reasons. The single-byte requirement lets the parser identify field boundaries with a simple byte comparison, without a state machine for multi-byte sequences.
If you're working with data from external systems that use multi-character delimiters, consider these approaches:
1. Extract-Transform-Load (ETL): Use a dedicated ETL tool like Apache NiFi, Talend, or Informatica that can handle complex delimiter transformations.
2. Foreign Data Wrapper (FDW): PostgreSQL's file_fdw extension also has the same single-character limitation, but other FDWs (like odbc_fdw or oracle_fdw) may offer more flexibility.
3. Alternate formats: Instead of COPY, use JSONB or XML import if your source data supports structured formats. PostgreSQL can efficiently parse and unnest these.
4. Database drivers: When importing from code (Python, Node.js), use prepared statements with parameterized queries instead of COPY, which gives you full control over delimiter handling:
import psycopg2

conn = psycopg2.connect("...")
cur = conn.cursor()
# Read file with multi-char delimiters and insert row by row
with open("data.txt") as f:
    for line in f:
        fields = line.rstrip("\n").split("||")  # Split by multi-char delimiter
        cur.execute("INSERT INTO my_table VALUES (%s, %s, %s)", fields)
conn.commit()

5. Stream processing: For very large files, use command-line tools like awk or GNU parallel to split the file into chunks, apply transformations, and load in parallel:
# Process in parallel with GNU parallel
cat large_file.txt | parallel --pipe --block 10M 'sed "s/||/,/g"' | psql -c "\copy my_table FROM STDIN WITH (DELIMITER ',')"

Always validate your transformed data on a sample before loading the entire file. Use COPY FROM into a test table first:
COPY my_table_test FROM '/path/to/transformed_file.txt' WITH (DELIMITER ',');
SELECT COUNT(*) FROM my_table_test; -- Verify record count
SELECT * FROM my_table_test LIMIT 5; -- Spot-check data