DynamoDB returns LimitExceededException when you exceed the concurrent control plane operation limits. This error occurs when trying to create, update, or delete more than 10 tables simultaneously, or when the cumulative total of tables and indexes in CREATING, UPDATING, or DELETING state exceeds 500.
The LimitExceededException error with "Subscriber limit exceeded" in DynamoDB indicates that you've hit AWS limits on concurrent control plane operations. These are administrative operations that modify table structure or configuration, not data operations like reads and writes.

The primary limits are:

1. **10 concurrent table operations**: Only 10 CreateTable, UpdateTable, and DeleteTable requests can run simultaneously in any combination
2. **500 cumulative operations**: The total number of tables and indexes in CREATING, UPDATING, or DELETING state cannot exceed 500
3. **Per-subscriber restrictions**: These limits apply per AWS account in a given region

This error commonly occurs when:

- Using infrastructure-as-code tools (Terraform, CloudFormation, Serverless Framework) to deploy many tables at once
- Running parallel database migrations that create or modify multiple tables
- Performing bulk table updates or configuration changes
- Creating tables with multiple global secondary indexes simultaneously (each GSI counts toward the limit)

The error message may specify "Only 10 tables can be created, updated, or deleted simultaneously" to indicate the exact limit violated.
For CloudFormation or Terraform, configure explicit dependencies to force sequential table creation:
Terraform example:
resource "aws_dynamodb_table" "table1" {
  name         = "users-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "userId"

  attribute {
    name = "userId"
    type = "S"
  }
}

resource "aws_dynamodb_table" "table2" {
  name         = "orders-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "orderId"

  attribute {
    name = "orderId"
    type = "S"
  }

  # Force sequential creation
  depends_on = [aws_dynamodb_table.table1]
}

resource "aws_dynamodb_table" "table3" {
  name         = "products-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "productId"

  attribute {
    name = "productId"
    type = "S"
  }

  depends_on = [aws_dynamodb_table.table2]
}

CloudFormation example:
Resources:
  UsersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: users-table
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
  OrdersTable:
    Type: AWS::DynamoDB::Table
    DependsOn: UsersTable # Sequential dependency
    Properties:
      TableName: orders-table
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: orderId
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH

Key points:
- Use depends_on in Terraform to chain table creations
- Use DependsOn in CloudFormation to control execution order
- Chain dependencies to ensure no more than 10 operations run concurrently
Add delays and batching logic when creating tables programmatically:
// AWS SDK v3 example with batching
import { DynamoDBClient, CreateTableCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });

const CONCURRENT_LIMIT = 8; // Stay below 10 to be safe
const BATCH_DELAY_MS = 30000; // 30 seconds between batches

async function createTablesInBatches(tableConfigs) {
  const batches = [];
  for (let i = 0; i < tableConfigs.length; i += CONCURRENT_LIMIT) {
    batches.push(tableConfigs.slice(i, i + CONCURRENT_LIMIT));
  }

  for (const [index, batch] of batches.entries()) {
    console.log(`Creating batch ${index + 1}/${batches.length} (${batch.length} tables)...`);
    const promises = batch.map(config =>
      client.send(new CreateTableCommand(config))
        .catch(err => {
          console.error(`Failed to create ${config.TableName}:`, err.message);
          return null;
        })
    );
    await Promise.all(promises);

    // Wait before next batch
    if (index < batches.length - 1) {
      console.log(`Waiting ${BATCH_DELAY_MS / 1000}s before next batch...`);
      await new Promise(resolve => setTimeout(resolve, BATCH_DELAY_MS));
    }
  }
}

// Usage
const tables = [
  { TableName: 'table1', /* ... config ... */ },
  { TableName: 'table2', /* ... config ... */ },
  // ... 20 more tables
];
await createTablesInBatches(tables);

Best practices:
- Limit concurrent operations to 8-9 (below the 10 limit)
- Add 30-60 second delays between batches
- Implement retry logic with exponential backoff
- Check table status before proceeding to next batch
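The retry bullet above can be sketched as a small generic helper. This is an illustrative implementation, not part of the AWS SDK: `withBackoff`, its option names, and the jitter formula are all assumptions, and `operation` stands in for any wrapped `client.send` call.

```javascript
// Sketch of retry with exponential backoff for LimitExceededException.
// `operation` is any async function (e.g. () => client.send(command));
// helper name and parameters are illustrative, not an AWS SDK API.
async function withBackoff(operation, { maxRetries = 5, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (err.name !== 'LimitExceededException' || attempt >= maxRetries) {
        throw err; // not retryable, or out of attempts
      }
      // Exponential backoff: base * 2^attempt, plus jitter to spread retries
      const delay = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
      console.log(`LimitExceededException, retrying in ${Math.round(delay)}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

Typical use would be wrapping each control plane call, e.g. `await withBackoff(() => client.send(new CreateTableCommand(config)))`, so transient limit errors resolve themselves instead of failing the deployment.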
Poll table status and wait for completion before starting new operations:
import { DynamoDBClient, CreateTableCommand, DescribeTableCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });

async function waitForTableActive(tableName, maxWaitSeconds = 300) {
  const startTime = Date.now();
  const maxWaitMs = maxWaitSeconds * 1000;

  while (Date.now() - startTime < maxWaitMs) {
    try {
      const response = await client.send(
        new DescribeTableCommand({ TableName: tableName })
      );
      const status = response.Table?.TableStatus;
      console.log(`Table ${tableName} status: ${status}`);

      if (status === 'ACTIVE') {
        return true;
      }
      if (status === 'DELETING') {
        throw new Error(`Table ${tableName} is being deleted`);
      }

      // Wait 5 seconds before next check
      await new Promise(resolve => setTimeout(resolve, 5000));
    } catch (err) {
      if (err.name === 'ResourceNotFoundException') {
        throw new Error(`Table ${tableName} not found`);
      }
      throw err;
    }
  }
  throw new Error(`Timeout waiting for table ${tableName} to become ACTIVE`);
}

// Example usage in sequential deployment
async function deployTablesSequentially(tableConfigs) {
  for (const config of tableConfigs) {
    console.log(`Creating table: ${config.TableName}`);
    await client.send(new CreateTableCommand(config));

    console.log(`Waiting for ${config.TableName} to become ACTIVE...`);
    await waitForTableActive(config.TableName);
    console.log(`Table ${config.TableName} is ready`);
  }
}

Key points:
- Poll table status every 5 seconds
- Wait for ACTIVE status before creating next table
- Set reasonable timeout (5 minutes is typical)
- Handle ResourceNotFoundException for failed creations
Divide large infrastructure deployments into smaller, independent units:
Terraform modules approach:
# main.tf - Deploy in stages
module "core_tables" {
  source = "./modules/core-tables"
  # 8 tables maximum
}

module "analytics_tables" {
  source     = "./modules/analytics-tables"
  depends_on = [module.core_tables]
  # Another 8 tables
}

module "archive_tables" {
  source     = "./modules/archive-tables"
  depends_on = [module.analytics_tables]
  # Remaining tables
}

CloudFormation nested stacks:
# master-stack.yaml
Resources:
  CoreTablesStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/core-tables.yaml
  AnalyticsTablesStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: CoreTablesStack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/analytics-tables.yaml
  ArchiveTablesStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: AnalyticsTablesStack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/archive-tables.yaml

Serverless Framework stages:
# Deploy in stages
serverless deploy --config serverless.core.yml
serverless deploy --config serverless.analytics.yml
serverless deploy --config serverless.archive.yml

Benefits:
- Each stack/module stays under the 10-table limit
- Easier to manage and debug deployment failures
- Can roll back individual stacks independently
- Better organization of related tables
For large-scale deployments, request quota increases through AWS Support:
# Check current service quotas
aws service-quotas list-service-quotas \
  --service-code dynamodb \
  --query 'Quotas[?QuotaName==`Concurrent control plane operations`]'

# Request quota increase (via AWS Console or Support ticket)
# Note: Not all quotas are adjustable

Steps to request increase:
1. Open AWS Console → Service Quotas
2. Search for "DynamoDB" → "Concurrent control plane operations"
3. Click "Request quota increase"
4. Provide justification (e.g., "Need to deploy 50 tables for microservices architecture")
5. AWS typically responds within 24-48 hours
Important notes:
- The 10 concurrent operations limit may not be adjustable
- The 500 cumulative limit is typically adjustable
- Include your use case and expected growth in the request
- Consider architectural alternatives before requesting increases
Track ongoing operations to avoid hitting limits:
import { DynamoDBClient, ListTablesCommand, DescribeTableCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });

async function countActiveOperations() {
  // Note: ListTables returns at most 100 names per call; paginate with
  // LastEvaluatedTableName if your account has more tables
  const listResponse = await client.send(new ListTablesCommand({}));
  const tableNames = listResponse.TableNames || [];

  const statuses = await Promise.all(
    tableNames.map(async (tableName) => {
      try {
        const response = await client.send(
          new DescribeTableCommand({ TableName: tableName })
        );
        return {
          name: tableName,
          status: response.Table?.TableStatus,
          gsiStatuses: response.Table?.GlobalSecondaryIndexes?.map(gsi => ({
            name: gsi.IndexName,
            status: gsi.IndexStatus
          })) || []
        };
      } catch (err) {
        return { name: tableName, status: 'ERROR', error: err.message };
      }
    })
  );

  const transitionalStates = ['CREATING', 'UPDATING', 'DELETING'];
  const activeOps = statuses.filter(table =>
    transitionalStates.includes(table.status) ||
    table.gsiStatuses?.some(gsi => transitionalStates.includes(gsi.status))
  );

  console.log(`Active control plane operations: ${activeOps.length}`);
  console.log('Tables in transitional states:', activeOps);
  return activeOps.length;
}

// Check before starting new operations
const activeCount = await countActiveOperations();
if (activeCount >= 8) {
  console.log('Too many active operations, waiting...');
  await new Promise(resolve => setTimeout(resolve, 60000));
}

Monitoring tips:
- Check active operations before starting new ones
- Include GSI status in monitoring (they count toward limits)
- Set up CloudWatch alarms for failed table operations
- Use AWS CloudTrail to audit CreateTable/UpdateTable/DeleteTable API calls
## Understanding DynamoDB Control Plane Limits
### Hard Limits (Per AWS Account, Per Region):
- 10 concurrent operations: Maximum simultaneous CreateTable, UpdateTable, DeleteTable requests
- 500 cumulative limit: Total tables + indexes in CREATING/UPDATING/DELETING states
- Table name reuse delay: A deleted table's name cannot be reused until 10 minutes after the deletion completes
### What Counts Toward the Limits:
- Each CreateTable operation = 1 concurrent operation
- Each UpdateTable operation = 1 concurrent operation
- Each DeleteTable operation = 1 concurrent operation
- Creating a table with N global secondary indexes = 1 CreateTable request, but 1 + N entities (the table plus each GSI) toward the 500 cumulative limit
- Adding/removing/updating GSIs via UpdateTable = additional operations
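The counting rules above are easy to get wrong at a glance, so here is a minimal illustrative helper (the function name and `gsiCount` field are assumptions, not an AWS API) that estimates how many entities a planned batch of CreateTable calls puts into CREATING state at once:

```javascript
// Hypothetical helper: each table counts once, plus one per GSI,
// toward the 500 cumulative in-flight limit.
function inFlightEntities(tablePlans) {
  return tablePlans.reduce(
    (total, plan) => total + 1 + (plan.gsiCount || 0),
    0
  );
}

// Example: 8 tables with 2 GSIs each = 8 concurrent CreateTable requests,
// but 8 * (1 + 2) entities in CREATING state
const plannedBatch = Array.from({ length: 8 }, () => ({ gsiCount: 2 }));
console.log(inFlightEntities(plannedBatch)); // 24
```

The point of the arithmetic: a batch can respect the 10-request concurrency limit yet still contribute a surprisingly large share of the 500 cumulative limit when tables carry several GSIs.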
### CloudFormation-Specific Considerations:
CloudFormation may serialize some DynamoDB operations automatically, but not all:
- If you have 20 DynamoDB tables in a template, CloudFormation may try to create them all simultaneously
- Use DependsOn explicitly or nested stacks to control parallelism
- CloudFormation rollbacks can trigger additional DeleteTable operations
### Terraform-Specific Considerations:
Terraform's default parallelism is 10, which can hit the limit:
# Reduce Terraform parallelism to stay under limit
terraform apply -parallelism=8

### Serverless Framework:
The Serverless Framework translates to CloudFormation, inheriting the same limitations:
- Use serverless-plugin-split-stacks to automatically split large deployments
- Configure sequential deployment in serverless.yml:
custom:
  splitStacks:
    perFunction: false
    perType: true
    perGroupFunction: false

### Global Secondary Index Limits:
- Each table can have up to 20 GSIs
- Creating a table with 20 GSIs counts as 21 operations (1 table + 20 indexes)
- Update operations that add/remove GSIs count separately
- Cannot add and remove GSIs in the same UpdateTable call
### Best Practices for Large-Scale Deployments:
1. Pre-create core tables manually before running IaC tools
2. Use separate AWS accounts for different environments (dev/staging/prod)
3. Deploy in waves: Create 8 tables, wait 2 minutes, create next 8
4. Monitor CloudWatch Events for table state changes
5. Implement idempotent deployment scripts that can resume after failures
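Practice 5 (idempotent, resumable scripts) boils down to treating "table already exists" as success. A minimal sketch, with `send` standing in for a wrapped `client.send` and the helper name purely illustrative:

```javascript
// Idempotent create step: a table that already exists (DynamoDB raises
// ResourceInUseException) is treated as success, so a rerun of the
// deployment script skips completed work and resumes where it failed.
async function ensureTable(send, config) {
  try {
    await send(config);
    return 'created';
  } catch (err) {
    if (err.name === 'ResourceInUseException') {
      return 'already-exists'; // safe to continue
    }
    throw err; // anything else is a real failure
  }
}
```

With this in place, a deployment loop over all table configs can simply be rerun after any LimitExceededException-induced failure without manual cleanup.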
### Workarounds for Microservices Architectures:
If you have 50+ microservices each needing DynamoDB tables:
- Single-table design: Use one table with different partition key patterns
- Shared tables: Group related microservices on shared tables
- Multi-region deployment: Deploy tables in batches across regions (limits are per-region)
- Phased rollout: Deploy tables as microservices are activated, not all at once
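To make the single-table option concrete, here is a sketch of the usual key-pattern convention. The `USER#`/`ORDER#` prefixes and helper names are illustrative conventions, not anything DynamoDB requires:

```javascript
// Single-table design: one table serves multiple entity types by
// encoding the type into the partition key (PK) and sort key (SK).
const userKey = (userId) => ({ PK: `USER#${userId}`, SK: 'PROFILE' });
const orderKey = (userId, orderId) => ({ PK: `USER#${userId}`, SK: `ORDER#${orderId}` });

// All of a user's items share one partition, so a single Query on
// PK = "USER#123" returns the profile and every order together
console.log(userKey('123'));       // { PK: 'USER#123', SK: 'PROFILE' }
console.log(orderKey('123', '9')); // { PK: 'USER#123', SK: 'ORDER#9' }
```

Because every microservice writes to the same table with its own key prefix, the fleet needs one CreateTable operation instead of fifty, which sidesteps the control plane limits entirely.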
### Error Messages You May Encounter:
- LimitExceededException: Subscriber limit exceeded: Only 10 tables can be created, updated, or deleted simultaneously
- LimitExceededException: Subscriber limit exceeded
- LimitExceededException: Simultaneous table operations limit exceeded
- LimitExceededException: Too many operations in progress
### When to Contact AWS Support:
- You need to deploy 100+ tables and cannot use single-table design
- You're hitting the 500 cumulative limit during migrations
- Your architecture requires sustained high concurrency of control plane operations
- You need faster table creation for disaster recovery scenarios