This error occurs when PostgreSQL encounters characters that cannot be represented in the database's character encoding. Common causes include encoding mismatches between client and server, or attempting to insert Unicode characters into a limited encoding like LATIN1.
PostgreSQL enforces strict character encoding rules. When you attempt to insert or process data containing characters that aren't valid in your database's encoding, PostgreSQL raises this error. The error indicates that the character repertoire (the set of characters supported) of the target encoding is too small to represent the character being inserted. For example, if your database uses LATIN1 (which only supports Western European characters) and you try to insert a Chinese character encoded as UTF-8, this error will occur.
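The repertoire limit described above is easy to reproduce outside PostgreSQL. A minimal sketch in Python, whose "latin-1" codec corresponds to LATIN1:

```python
# 'é' is part of the LATIN1 repertoire, so encoding succeeds
assert "é".encode("latin-1") == b"\xe9"

# A Chinese character has no LATIN1 equivalent -- the same condition
# PostgreSQL reports when the target encoding can't represent a character
try:
    "中".encode("latin-1")
except UnicodeEncodeError as exc:
    print(exc)  # 'latin-1' codec can't encode character ...
```

Python raises `UnicodeEncodeError` where PostgreSQL raises this error; the underlying cause is identical.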
Connect to your PostgreSQL database and run these commands:
SHOW server_encoding;
SHOW client_encoding;
Note the values. If server_encoding is LATIN1 or another limited single-byte encoding, that's likely the root cause.
Temporarily set the client encoding to match UTF-8:
SET client_encoding = 'UTF8';
Then retry your INSERT or UPDATE statement. If it succeeds, the issue is an encoding mismatch between client and server.
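A mismatch is dangerous precisely because every byte is valid in a single-byte encoding: UTF-8 bytes misread as LATIN1 turn into mojibake rather than raising an error. A small Python illustration of what PostgreSQL's strict check prevents:

```python
# The two UTF-8 bytes of 'é', misread one byte at a time as LATIN1,
# become the classic mojibake pair 'Ã©' -- silently, with no error
utf8_bytes = "é".encode("utf-8")   # b'\xc3\xa9'
assert utf8_bytes.decode("latin-1") == "Ã©"
```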
The best long-term solution is to use UTF-8, which supports all Unicode characters:
CREATE DATABASE mydb ENCODING 'UTF8';
If you have an existing database, you'll need to:
1. Dump the database:
pg_dump old_db > backup.sql
2. Create a new UTF-8 database:
CREATE DATABASE new_db ENCODING 'UTF8';
3. Restore the data:
psql new_db < backup.sql
If you can't change the database encoding, clean the source data to remove unsupported characters:
-- Round-trip values through UTF8; this raises an error (rather than
-- silently corrupting data) on any value that can't be converted
SELECT convert_from(convert_to(your_column, 'UTF8'), 'UTF8');
-- Or filter to ASCII-only (if appropriate for your use case)
SELECT regexp_replace(your_column, '[^\x20-\x7E]', '', 'g');
If importing from a file, check and convert the file's encoding:
# Check current file encoding
file -i yourfile.csv
# Convert to UTF-8 if needed
iconv -f LATIN1 -t UTF-8 input.csv > output.csv
Then import the converted file into PostgreSQL.
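If iconv isn't available, a rough Python equivalent of the conversion step is shown below (the `convert_file` helper and filenames are illustrative, not a standard tool):

```python
from pathlib import Path

def convert_file(src: str, dst: str,
                 src_enc: str = "latin-1", dst_enc: str = "utf-8") -> None:
    """Re-encode a text file, failing loudly on undecodable bytes."""
    text = Path(src).read_text(encoding=src_enc)
    Path(dst).write_text(text, encoding=dst_enc)
```

Because the read uses strict error handling, a wrong `src_enc` guess raises an exception instead of producing a silently corrupted output file.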
PostgreSQL's strict encoding enforcement is a feature, not a bug. Some databases allow mixed encodings, leading to data corruption and subtle bugs. PostgreSQL enforces consistency.
If you're using Ruby on Rails or a similar ORM, ensure config/database.yml specifies the right encoding for your adapter: encoding: unicode is the conventional value for PostgreSQL (utf8mb4 applies to MySQL). Some older applications default to latin1.
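For reference, a PostgreSQL entry in config/database.yml might look like this (the database name and structure are placeholders, not taken from any particular application):

```yaml
# config/database.yml -- illustrative values
production:
  adapter: postgresql
  database: myapp_production
  encoding: unicode   # UTF-8 for PostgreSQL; MySQL apps use utf8mb4 instead
```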
For production databases, UTF-8 is the modern standard. LATIN1 and other single-byte encodings should only be used for legacy systems. If you're storing user-generated content, international text, or emojis, UTF-8 is essential.
The convert_to() and convert_from() functions can be helpful for handling data that's already in the database but in the wrong encoding, but they should be used cautiously as they can lose data if the target encoding can't represent the characters.
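The same caution applies to any lossy re-encoding, inside or outside the database. A Python sketch of the two failure modes, silent dropping versus visible replacement:

```python
text = "naïve 中文"

# errors="ignore" silently drops characters LATIN1 can't represent,
# losing data with no trace
assert text.encode("latin-1", errors="ignore") == "naïve ".encode("latin-1")

# errors="replace" substitutes '?' instead, which at least leaves
# a visible marker of where data was lost
assert text.encode("latin-1", errors="replace").decode("latin-1") == "naïve ??"
```

If you must down-convert, prefer the approach that leaves an audit trail.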