This error occurs when encoding or decoding data that contains characters outside the ASCII range (0-127) using 'ascii' encoding. ASCII is a 7-bit character encoding covering only basic Latin letters (a-z, A-Z), digits (0-9), punctuation, and control characters; it cannot represent accented characters (é, ñ, ü), non-Latin scripts such as Chinese or Arabic, emoji, or even extended Latin-1 characters. When Node.js encounters a character value greater than 127 under ASCII encoding, it cannot encode or decode that value and throws a RangeError. This commonly happens when:

- Encoding strings that contain non-ASCII characters with 'ascii' encoding
- Using Buffer.from() with 'ascii' encoding on strings with Unicode characters
- Reading file data that contains characters outside the ASCII range
- Processing user input or external data without validating its encoding
Replace 'ascii' encoding with 'utf8' encoding, which supports the full Unicode character set:
```javascript
// Wrong - throws RangeError with non-ASCII characters
const str = 'Hello, Café';
const buf = Buffer.from(str, 'ascii'); // RangeError!
```

```javascript
// Correct - use UTF-8 encoding
const str = 'Hello, Café';
const buf = Buffer.from(str, 'utf8');
console.log(buf); // <Buffer 48 65 6c 6c 6f 2c 20 43 61 66 c3 a9>
console.log(buf.toString('utf8')); // "Hello, Café"
```

UTF-8 is the modern standard encoding and handles all Unicode characters, including accented letters, emoji, and characters from any language.
If you absolutely must use ASCII encoding (rare legacy scenarios), validate that the string contains only ASCII characters first:
```javascript
function isAsciiOnly(str) {
  return /^[\x00-\x7F]*$/.test(str);
}

const str = 'Hello, Café';
if (isAsciiOnly(str)) {
  const buf = Buffer.from(str, 'ascii');
  console.log(buf);
} else {
  console.warn('String contains non-ASCII characters, cannot use ASCII encoding');
  // Use UTF-8 instead
  const buf = Buffer.from(str, 'utf8');
  console.log(buf);
}
```

This prevents the RangeError and allows a graceful fallback to UTF-8.
When reading files that may contain non-ASCII characters, explicitly specify UTF-8:
```javascript
const fs = require('fs');

// Wrong - assumes data is ASCII
fs.readFile('data.txt', 'ascii', (err, data) => {
  if (err) throw err;
  console.log(data); // Will fail with non-ASCII content
});

// Correct - use UTF-8
fs.readFile('data.txt', 'utf8', (err, data) => {
  if (err) throw err;
  console.log(data); // Works with any Unicode content
});

// Or omit the encoding (returns a Buffer)
fs.readFile('data.txt', (err, buffer) => {
  if (err) throw err;
  const text = buffer.toString('utf8');
  console.log(text);
});
```

UTF-8 is the default encoding for most modern files and web content.
Verify what encoding your source data uses and ensure compatibility:
```javascript
// Identify character codes in your string
const str = 'Hello, Café';
for (let i = 0; i < str.length; i++) {
  console.log(`${str[i]}: ${str.charCodeAt(i)}`);
}
// Output shows 'é' has code 233, which is > 127 (outside the ASCII range)

// Check if all characters are ASCII (0-127)
const allAscii = [...str].every(ch => ch.charCodeAt(0) <= 127);
console.log(allAscii); // false - contains non-ASCII

// Use the appropriate encoding
const buf = allAscii
  ? Buffer.from(str, 'ascii')
  : Buffer.from(str, 'utf8');
```

Understanding character codes helps identify when ASCII encoding is insufficient.
Search your codebase for instances of 'ascii' encoding and replace with 'utf8':
```bash
# Find all references to ASCII encoding
grep -r '"ascii"' src/
grep -r "'ascii'" src/
```

Then update each occurrence:
```javascript
// Before
Buffer.from(data, 'ascii')
buf.toString('ascii')
fs.readFile(path, 'ascii', callback)

// After
Buffer.from(data, 'utf8')
buf.toString('utf8')
fs.readFile(path, 'utf8', callback)
```

Modern applications should use UTF-8 as the default encoding for all text data.
Why ASCII is Obsolete: ASCII was designed in the 1960s for English-only communication and can only represent 128 characters. With modern applications serving global users, using ASCII is a legacy choice that causes compatibility issues. UTF-8 is backward-compatible with ASCII (ASCII bytes 0-127 have the same meaning in UTF-8) while supporting unlimited Unicode characters.
UTF-8 Encoding Details: UTF-8 uses variable-length byte sequences. ASCII characters (0-127) use 1 byte, characters 128-2047 use 2 bytes, characters 2048-65535 use 3 bytes, and higher use 4 bytes. The 'é' character (U+00E9) encodes as 2 bytes in UTF-8: 0xC3 0xA9.
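These byte counts can be confirmed directly with Buffer.byteLength():

```javascript
// Bytes per character in each UTF-8 range
console.log(Buffer.byteLength('A', 'utf8'));  // 1 - U+0041, ASCII
console.log(Buffer.byteLength('é', 'utf8'));  // 2 - U+00E9
console.log(Buffer.byteLength('中', 'utf8')); // 3 - U+4E2D
console.log(Buffer.byteLength('😀', 'utf8')); // 4 - U+1F600
console.log(Buffer.from('é', 'utf8'));        // <Buffer c3 a9>
```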
Latin-1 (ISO-8859-1) Alternative: If you specifically need to support extended Latin characters (ASCII + 128 more) and nothing beyond, you can use 'latin1' encoding in Node.js, which supports values 0-255. However, UTF-8 is still preferred for future compatibility.
Stream Encoding: When working with streams, set encoding on the stream itself rather than converting after: stream.setEncoding('utf8'). This properly handles multi-byte characters split across chunk boundaries.
Database Considerations: Ensure your database is configured with UTF-8 encoding (utf8mb4 for MySQL) to properly store and retrieve non-ASCII characters without data corruption.
JSON Serialization: The JSON specification requires UTF-8 for interchange. If you serialize JSON containing non-ASCII characters and then decode the resulting bytes with ASCII encoding, you'll hit this RangeError. Always use UTF-8 for JSON.