Prisma surfaces P2034 when a transaction cannot commit because the database detected a write conflict or a deadlock. Resolve it by retrying the transaction in a bounded loop, raising the isolation level where appropriate, and reducing contention on the affected rows so the operation can finish successfully.
The P2034 error tells you that the database aborted the transaction after detecting either a write conflict or a deadlock. Under the hood the database tracks which rows each transaction reads and locks, and it rolls back one of the conflicting transactions (the "victim") to preserve serializability. Prisma surfaces that decision as a clean P2034 failure rather than letting it show up as a mysterious downstream error. The common patterns are two transactions updating the same row under Read Committed, or two transactions each holding a lock the other needs (a deadlock). This is a safety mechanism found in PostgreSQL (SQLSTATE 40001 for serialization failures, 40P01 for deadlocks), MySQL (error 1213), SQL Server, and other relational engines, and it becomes more frequent under heavy concurrent writes or when the same unique key is modified repeatedly in rapid succession.
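To make the failure concrete, here is a minimal sketch of what catching it looks like in application code (the order model and its fields are placeholder names, not part of the error contract):

import { Prisma, PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

try {
  await prisma.order.update({ where: { id: 1 }, data: { status: "paid" } });
} catch (error) {
  if (
    error instanceof Prisma.PrismaClientKnownRequestError &&
    error.code === "P2034"
  ) {
    // The database already rolled the work back, so a retry is safe.
    console.warn("P2034: write conflict or deadlock", error.message);
  } else {
    throw error;
  }
}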
Enable Prisma query logging or add a middleware to capture the SQL that runs inside the transaction. This shows you which tables and rows race against each other and whether the same transaction hits serialization failures repeatedly.
Enable debug logging for the Client:

DEBUG="prisma:query" node ./src/index.js

Prisma middleware for targeted logging:
prisma.$use(async (params, next) => {
  if (params.action === "createMany" || params.action === "updateMany") {
    console.log("tx query", params.model, params.action, params.args);
  }
  return next(params);
});

Additionally, stream your database's deadlock or serialization logs. In Postgres the system log emits "ERROR: deadlock detected" or "ERROR: could not serialize access". Once you know the affected tables, you can adjust the transaction scope or locking behavior.
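As an alternative to the DEBUG variable, Prisma's event-based logging emits each query with its parameters and duration, which helps correlate conflicts with specific statements. A sketch:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({
  log: [{ emit: "event", level: "query" }],
});

prisma.$on("query", (event) => {
  // event.query is the SQL text, event.params the bound values,
  // event.duration the execution time in milliseconds.
  console.log(`${event.duration}ms ${event.query} ${event.params}`);
});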
Raise the isolation level for the transaction so the database treats concurrent runs as if they executed serially. This surfaces P2034 during the commit phase and keeps the data consistent even when queries overlap.
await prisma.$transaction(
  [
    prisma.order.update({ where: { id }, data: { status: "processing" } }),
    prisma.inventory.updateMany({
      where: { productId },
      data: { reserved: { decrement: quantity } },
    }),
  ],
  {
    isolationLevel: Prisma.TransactionIsolationLevel.Serializable,
  }
);

Serializable is the strictest isolation level, so conflicts surface more often under it, but combining it with retries makes the application resilient to the transient conflicts that trigger P2034.
Wrap the transaction in a bounded retry loop so a conflicted transaction is attempted again after the competing transaction commits. Add a backoff to avoid thundering herds.
const MAX_RETRIES = 5;
let retries = 0;
while (retries < MAX_RETRIES) {
  try {
    await prisma.$transaction([/* operations */], {
      isolationLevel: Prisma.TransactionIsolationLevel.Serializable,
    });
    break;
  } catch (error) {
    if (error.code !== "P2034") {
      throw error;
    }
    retries++;
    if (retries === MAX_RETRIES) {
      // Rethrow instead of exiting the loop silently once retries are exhausted.
      throw error;
    }
    // Linear backoff so competing transactions stop colliding.
    await new Promise((resolve) => setTimeout(resolve, 50 * retries));
  }
}

Stop retrying after a short number of attempts to avoid locking up resources. Log each retry so you can spot whether the conflict rate is increasing under load.
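If several code paths need this pattern, a small wrapper keeps the retry policy in one place. A sketch; the withRetry helper and its defaults are illustrative, not a Prisma API:

async function withRetry(run, { maxRetries = 5, baseDelayMs = 50 } = {}) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await run();
    } catch (error) {
      if (error.code !== "P2034" || attempt === maxRetries) {
        throw error;
      }
      console.warn(`P2034 conflict, retrying (attempt ${attempt}/${maxRetries})`);
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * attempt));
    }
  }
}

// Usage: pass a function so each attempt opens a fresh transaction.
await withRetry(() =>
  prisma.$transaction([/* operations */], {
    isolationLevel: Prisma.TransactionIsolationLevel.Serializable,
  })
);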
If two different processes fight for the same data, serialize the work with explicit locks or a single worker.
1. Use row-level locking before the heavy writes (see the sketch after this list for how to pair the lock with the writes):

await prisma.$executeRaw`SELECT id FROM "Order" WHERE id = ${orderId} FOR UPDATE`;

2. Use Postgres advisory locks when you want to serialize by business key; pg_advisory_xact_lock holds the lock until the surrounding transaction ends, so run it inside the same transaction as the writes:

await prisma.$executeRaw`SELECT pg_advisory_xact_lock(${customerId})`;

3. Or queue the operation in a single worker (Redis stream, job queue) so only one process touches the record at a time.
Reducing parallel writes to the same record stops P2034 at the source.
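For option 1, note that the FOR UPDATE lock only protects you if the lock and the writes share a transaction. A sketch using Prisma's interactive transactions (the Order model and its fields are placeholders):

await prisma.$transaction(async (tx) => {
  // Lock the row first; competing transactions block here instead of conflicting.
  await tx.$executeRaw`SELECT id FROM "Order" WHERE id = ${orderId} FOR UPDATE`;
  await tx.order.update({
    where: { id: orderId },
    data: { status: "processing" },
  });
});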
Why the database chooses P2034
Postgres, MySQL, SQL Server, and other relational engines detect both write conflicts and deadlocks and kill one of the conflicting transactions rather than corrupting data. Postgres reports serialization failures as SQLSTATE 40001 and deadlocks as 40P01, while MySQL raises error 1213 for deadlocks; Prisma normalizes all of these into P2034. As you raise the isolation level or run more concurrent writes against the same rows, you are more likely to trigger these safe failures, which is why a retry loop is essential.
CockroachDB and SQLite
CockroachDB runs at Serializable isolation by default, and SQLite serializes writers with database-level locking, so Prisma can surface P2034 more often even under modest concurrency. Reduce contention or break work into small batches where possible.
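As an illustrative sketch of batching (the chunk size, model, and fields are placeholders), splitting one large write into small sequential transactions keeps each transaction's lock footprint short:

const CHUNK_SIZE = 100;
for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  const chunk = rows.slice(i, i + CHUNK_SIZE);
  // Each small transaction holds its locks briefly, lowering conflict odds.
  await prisma.$transaction(
    chunk.map((row) =>
      prisma.inventory.update({
        where: { id: row.id },
        data: { reserved: row.reserved },
      })
    )
  );
}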
Monitoring tips
Track the P2034 rate in your logs or telemetry. If retries per minute climb under load, consider batching writes, splitting high-traffic endpoints, or introducing queue-based serialization.
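A lightweight way to get that signal, assuming no metrics client is wired up yet (swap the counter for your telemetry library in production):

let p2034Count = 0;

// Call this from the retry loop's catch block whenever error.code === "P2034".
function recordConflict() {
  p2034Count++;
}

// Report and reset once a minute so spikes under load stand out.
setInterval(() => {
  if (p2034Count > 0) {
    console.warn(`P2034 conflicts in the last minute: ${p2034Count}`);
  }
  p2034Count = 0;
}, 60_000);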