This error occurs when AWS DynamoDB encounters conflicts while exporting table data to Amazon S3. Conflicts typically arise from concurrent export jobs, S3 bucket permission issues, or existing files in the target location. The export operation fails, preventing data backup or migration.
The ExportConflictException is thrown by AWS DynamoDB when there is a conflict during an export operation to Amazon S3, meaning DynamoDB cannot safely write the exported data to the specified S3 bucket. DynamoDB exports write data files to S3 in formats such as DynamoDB JSON or Amazon Ion. Conflicts can arise from:

1. **Concurrent exports**: Multiple export jobs targeting the same S3 prefix or overlapping file names
2. **S3 permissions**: Insufficient write permissions or bucket policies blocking DynamoDB
3. **Existing files**: The target S3 location already contains files that would be overwritten
4. **Bucket configuration**: S3 bucket settings incompatible with DynamoDB export requirements
5. **Resource constraints**: S3 bucket limits or throttling affecting write operations

This error indicates that DynamoDB cannot complete the export without risking data corruption or permission violations. The export job fails, and you need to resolve the S3 conflict before retrying.
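If you start exports programmatically, the conflict surfaces as a modeled exception on the boto3 DynamoDB client. A minimal sketch of catching it (the table ARN, bucket, and prefix below are placeholder values):

```python
import boto3

dynamodb = boto3.client("dynamodb")

try:
    # Placeholder ARN, bucket, and prefix -- substitute your own values
    response = dynamodb.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
        S3Bucket="my-export-bucket",
        S3Prefix="exports/2024-01-15",
        ExportFormat="DYNAMODB_JSON",
    )
    print("Export started:", response["ExportDescription"]["ExportArn"])
except dynamodb.exceptions.ExportConflictException as err:
    # Conflict with another export job or with the target S3 location
    print("Export conflict:", err.response["Error"]["Message"])
```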
First, examine the failed export job to understand the specific conflict:
# Using AWS CLI
aws dynamodb describe-export --export-arn arn:aws:dynamodb:region:account:table/table-name/export/export-id
# Check the S3 bucket and prefix
aws s3 ls s3://your-bucket/export-prefix/ --recursive
# Check the failure details reported by the export job
aws dynamodb describe-export --export-arn arn:aws:dynamodb:region:account:table/table-name/export/export-id --query 'ExportDescription.{Status:ExportStatus,FailureCode:FailureCode,FailureMessage:FailureMessage}'
Look for specific error messages about (see the sketch after this list):
- S3 permission denied errors
- File already exists conflicts
- Concurrent operation details
- Bucket policy violations
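To pull those details programmatically, DescribeExport exposes the failure code and message on the ExportDescription. A minimal boto3 sketch (the export ARN is a placeholder):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder ARN of the failed export -- substitute your own
export_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/my-table/export/01234567890123-abcdefgh"

desc = dynamodb.describe_export(ExportArn=export_arn)["ExportDescription"]
print("Status:         ", desc.get("ExportStatus"))
print("Failure code:   ", desc.get("FailureCode"))
print("Failure message:", desc.get("FailureMessage"))
```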
Ensure the S3 bucket allows DynamoDB to write files:
# Check bucket policy
aws s3api get-bucket-policy --bucket your-bucket --query Policy --output text | jq .
# Check bucket ACL
aws s3api get-bucket-acl --bucket your-bucket
# Example bucket policy allowing DynamoDB exports:
# {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Effect": "Allow",
# "Principal": {
# "Service": "dynamodb.amazonaws.com"
# },
# "Action": [
# "s3:PutObject",
# "s3:GetObject",
# "s3:DeleteObject"
# ],
# "Resource": [
# "arn:aws:s3:::your-bucket/export-prefix/*"
# ],
# "Condition": {
# "StringEquals": {
# "aws:SourceAccount": "your-account-id",
# "aws:SourceArn": "arn:aws:dynamodb:region:account:table/table-name"
# }
# }
# }
# ]
# }
# Apply updated bucket policy
aws s3api put-bucket-policy --bucket your-bucket --policy file://bucket-policy.json
Required permissions: DynamoDB needs s3:PutObject, s3:GetObject, and s3:DeleteObject on the export prefix.
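As a quick sanity check before retrying, you can confirm that your own credentials can write to and delete from the export prefix. This is only a rough proxy, since the export itself runs with the permissions granted to DynamoDB rather than your CLI credentials. A minimal sketch with hypothetical bucket and prefix names:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

bucket = "your-bucket"    # hypothetical bucket name
prefix = "export-prefix"  # hypothetical export prefix

test_key = f"{prefix}/permission-check.tmp"
try:
    s3.put_object(Bucket=bucket, Key=test_key, Body=b"permission check")
    s3.delete_object(Bucket=bucket, Key=test_key)
    print("Write and delete on the export prefix succeeded")
except ClientError as err:
    print("S3 permission problem:", err.response["Error"]["Code"])
```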
Check for and manage concurrent export jobs:
# List active export jobs for the table
aws dynamodb list-exports --table-arn arn:aws:dynamodb:region:account:table/table-name --query 'ExportSummaries[?ExportStatus==`IN_PROGRESS` || ExportStatus==`FAILED`]'
# Exports cannot be cancelled once started; check the status of a conflicting job and wait for it to finish
aws dynamodb describe-export --export-arn arn:aws:dynamodb:region:account:table/table-name/export/conflicting-export-id --query 'ExportDescription.ExportStatus'
# Use unique S3 prefixes for each export
export_prefix="exports/$(date +%Y-%m-%d-%H-%M-%S)"
echo "Using prefix: $export_prefix"
# Start new export with unique prefix
aws dynamodb export-table-to-point-in-time --table-arn arn:aws:dynamodb:region:account:table/table-name --s3-bucket your-bucket --s3-prefix "$export_prefix" --export-format DYNAMODB_JSON --export-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
Best practice: Use timestamp-based prefixes to avoid conflicts.
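Before starting a new export, it can also help to confirm programmatically that nothing is already running against the table. A minimal boto3 sketch using ListExports (the table ARN is a placeholder; ListExports paginates, so large accounts may need to follow NextToken):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder table ARN -- substitute your own
table_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/my-table"

summaries = dynamodb.list_exports(TableArn=table_arn)["ExportSummaries"]
in_progress = [s for s in summaries if s["ExportStatus"] == "IN_PROGRESS"]

if in_progress:
    print("Exports already running against this table:")
    for summary in in_progress:
        print("  ", summary["ExportArn"])
else:
    print("No in-progress exports; safe to start a new one")
```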
Handle existing files in the target S3 location:
# Check for existing files in the export prefix
aws s3 ls s3://your-bucket/export-prefix/ --recursive --human-readable
# Remove conflicting files (if safe)
aws s3 rm s3://your-bucket/export-prefix/ --recursive
# Or move existing files to archive
aws s3 mv s3://your-bucket/export-prefix/ s3://your-bucket/archive/export-prefix-$(date +%Y%m%d)/ --recursive
# Use a new, clean prefix for the export
clean_prefix="exports/clean-$(date +%s)"
aws dynamodb export-table-to-point-in-time --table-arn arn:aws:dynamodb:region:account:table/table-name --s3-bucket your-bucket --s3-prefix "$clean_prefix" --export-format DYNAMODB_JSON
Warning: Only delete files if they're from failed exports or no longer needed.
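If you prefer to archive rather than delete, the move can also be scripted. A minimal sketch that copies every object under the old prefix into an archive prefix and then removes the original; the bucket and prefix names are hypothetical, copy_object only handles objects up to 5 GB, and you should only run this against files you no longer need in place:

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

bucket = "your-bucket"            # hypothetical bucket name
source_prefix = "export-prefix/"  # hypothetical prefix to clear
archive_prefix = f"archive/export-prefix-{datetime.now(timezone.utc):%Y%m%d}/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=source_prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Copy into the archive prefix, then delete the original object
        s3.copy_object(
            Bucket=bucket,
            Key=archive_prefix + key[len(source_prefix):],
            CopySource={"Bucket": bucket, "Key": key},
        )
        s3.delete_object(Bucket=bucket, Key=key)
        print("Archived", key)
```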
Verify S3 bucket settings don't conflict with exports:
# Check bucket encryption settings
aws s3api get-bucket-encryption --bucket your-bucket
# Check bucket versioning status
aws s3api get-bucket-versioning --bucket your-bucket
# Check bucket lifecycle rules
aws s3api get-bucket-lifecycle-configuration --bucket your-bucket 2>/dev/null || echo "No lifecycle configuration"
# Check bucket size (BucketSizeBytes is a daily storage metric)
aws cloudwatch get-metric-statistics --namespace AWS/S3 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=your-bucket Name=StorageType,Value=StandardStorage --start-time "$(date -u -d '2 days ago' +%Y-%m-%dT%H:%M:%SZ)" --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --period 86400 --statistics Average
# Check request rates (requires S3 request metrics to be enabled on the bucket)
aws cloudwatch get-metric-statistics --namespace AWS/S3 --metric-name AllRequests --dimensions Name=BucketName,Value=your-bucket --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --period 300 --statistics Sum
Common issues: S3 versioning can cause conflicts, encryption requirements may block DynamoDB, and request throttling on the bucket can slow or fail writes.
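The same settings can be inspected from code if you are building this check into a pipeline. A minimal sketch that reports versioning and default encryption for a hypothetical bucket name:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "your-bucket"  # hypothetical bucket name

# Versioning status ("Enabled", "Suspended", or absent if never enabled)
versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning:", versioning.get("Status", "Not enabled"))

# Default encryption algorithm configured on the bucket
try:
    enc = s3.get_bucket_encryption(Bucket=bucket)
    rule = enc["ServerSideEncryptionConfiguration"]["Rules"][0]
    print("Default encryption:", rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
except ClientError:
    print("No default encryption configuration found")
```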
Set up export with conflict resolution:
# Create export with explicit settings
# (an --incremental-export-specification applies only when --export-type is INCREMENTAL_EXPORT; see the incremental example later in this article)
aws dynamodb export-table-to-point-in-time --table-arn arn:aws:dynamodb:region:account:table/table-name --s3-bucket your-bucket --s3-bucket-owner your-account-id --s3-prefix "exports/$(date +%Y-%m-%d-%H-%M-%S)" --export-format DYNAMODB_JSON --export-type FULL_EXPORT --client-token "export-$(date +%s)"
# Monitor export progress
aws dynamodb describe-export --export-arn arn:aws:dynamodb:region:account:table/table-name/export/new-export-id --query 'ExportDescription.{Status:ExportStatus,StartTime:StartTime,BilledSizeBytes:BilledSizeBytes,ItemCount:ItemCount}'
# Verify export completed successfully
aws s3 ls s3://your-bucket/exports/ --recursive --summarize
Key settings: Use a unique client-token for idempotency, set --s3-bucket-owner when the bucket belongs to another account, and use timestamp-based prefixes.
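Exports can take a while, so the monitoring step is easy to script as a polling loop around DescribeExport. A minimal sketch (the export ARN is a placeholder):

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder export ARN returned by export_table_to_point_in_time
export_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/my-table/export/01234567890123-abcdefgh"

while True:
    desc = dynamodb.describe_export(ExportArn=export_arn)["ExportDescription"]
    status = desc["ExportStatus"]
    if status == "IN_PROGRESS":
        time.sleep(30)  # poll every 30 seconds
        continue
    if status == "COMPLETED":
        print("Export completed:", desc.get("ItemCount"), "items,",
              desc.get("BilledSizeBytes"), "bytes billed")
    else:
        print("Export failed:", desc.get("FailureCode"), desc.get("FailureMessage"))
    break
```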
### Understanding Export Conflict Scenarios
1. Concurrent Export Conflicts:
DynamoDB exports are designed to be isolated, but when multiple exports target the same S3 prefix, file naming conflicts can occur. Each export creates manifest files and data files with predictable names, causing collisions.
2. S3 Permission Model:
DynamoDB exports require specific S3 permissions. The service must be able to:
- Write objects to the specified prefix
- Read objects it creates (for verification)
- Delete objects (for cleanup of failed exports)
- The bucket policy must explicitly allow the dynamodb.amazonaws.com service principal
3. S3 Object Lock and Governance:
If S3 Object Lock is enabled with governance or compliance mode, DynamoDB cannot modify or delete exported files, causing conflicts on retry attempts.
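You can check whether Object Lock is in play before retrying. A minimal sketch with a hypothetical bucket name; buckets without Object Lock return an ObjectLockConfigurationNotFoundError:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "your-bucket"  # hypothetical bucket name

try:
    config = s3.get_object_lock_configuration(Bucket=bucket)["ObjectLockConfiguration"]
    retention = config.get("Rule", {}).get("DefaultRetention", {})
    print("Object Lock enabled:", config.get("ObjectLockEnabled"))
    if retention:
        print("Default retention:", retention.get("Mode"),
              retention.get("Days") or retention.get("Years"))
except ClientError as err:
    if err.response["Error"]["Code"] == "ObjectLockConfigurationNotFoundError":
        print("Object Lock is not configured on this bucket")
    else:
        raise
```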
4. Cross-Account Export Complexities:
Exporting to S3 buckets in different AWS accounts requires the following (a minimal boto3 sketch follows this list):
- Bucket policy allowing DynamoDB from source account
- IAM role assumption with proper permissions
- S3 bucket owner full control settings
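On the API side, the main difference for a cross-account export is supplying the destination bucket owner's account ID. A minimal boto3 sketch; the ARNs, bucket name, and account IDs are placeholders, and it assumes the destination bucket policy (shown later in this article) is already in place:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111111111111:table/source-table",  # source account
    S3Bucket="destination-bucket",   # bucket owned by the destination account
    S3BucketOwner="222222222222",    # destination account ID
    S3Prefix="exports/cross-account",
    ExportFormat="DYNAMODB_JSON",
    ClientToken=f"cross-account-export-{int(time.time())}",
)
print("Started:", response["ExportDescription"]["ExportArn"])
```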
### Best Practices for Reliable Exports
Pre-export Checklist:
1. Unique prefixes: Always use timestamp or UUID-based S3 prefixes
2. Permission validation: Test S3 write permissions before starting export
3. Capacity planning: Ensure S3 bucket has sufficient storage
4. Conflict monitoring: Set up CloudWatch alarms for export failures
Export Strategies:
Incremental Exports: For large tables, use incremental exports to avoid long-running jobs
aws dynamodb export-table-to-point-in-time --table-arn $TABLE_ARN --s3-bucket your-bucket --s3-prefix "exports/incremental" --export-type INCREMENTAL_EXPORT --incremental-export-specification '{
    "ExportFromTime": "2024-01-01T00:00:00Z",
    "ExportToTime": "2024-01-02T00:00:00Z",
    "ExportViewType": "NEW_AND_OLD_IMAGES"
}'
Parallel Exports: For very large tables, you can run several exports in parallel under separate prefixes (note that each point-in-time export is a full snapshot of the table as of the given --export-time)
# Snapshot as of mid-month
aws dynamodb export-table-to-point-in-time --table-arn $TABLE_ARN --s3-bucket your-bucket --export-time "2024-01-15T12:00:00Z" --s3-prefix "exports/part1"
# Snapshot as of end of month
aws dynamodb export-table-to-point-in-time --table-arn $TABLE_ARN --s3-bucket your-bucket --export-time "2024-01-31T23:59:59Z" --s3-prefix "exports/part2"
Automated Export Pipeline:
import time
from datetime import datetime

import boto3


def safe_dynamodb_export(table_arn, bucket, prefix):
    """Start a DynamoDB export to S3 with basic conflict handling."""
    dynamodb = boto3.client('dynamodb')
    s3 = boto3.client('s3')

    # Generate a unique, timestamp-based prefix to avoid collisions
    unique_prefix = f"{prefix}/{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}"

    # Check S3 write permissions before starting the export
    try:
        s3.put_object(Bucket=bucket, Key=f"{unique_prefix}/test.txt", Body=b'test')
        s3.delete_object(Bucket=bucket, Key=f"{unique_prefix}/test.txt")
    except Exception as e:
        return {"error": f"S3 permission test failed: {str(e)}"}

    # Start the export, retrying on conflicts with exponential backoff
    for attempt in range(3):
        try:
            response = dynamodb.export_table_to_point_in_time(
                TableArn=table_arn,
                S3Bucket=bucket,
                S3Prefix=unique_prefix,
                ExportFormat='DYNAMODB_JSON',
                ClientToken=f'export-{int(time.time())}'
            )
            return {"success": True, "export_arn": response['ExportDescription']['ExportArn']}
        except dynamodb.exceptions.ExportConflictException:
            if attempt < 2:
                time.sleep(2 ** attempt)  # Exponential backoff
                continue
            return {"error": "Export conflict after multiple retries"}
    return {"error": "Max retries exceeded"}
Monitoring and Alerting:
# CloudWatch alarm for export failures (verify the metric exists in your account; you may need to publish a custom metric from your export pipeline)
aws cloudwatch put-metric-alarm --alarm-name "DynamoDB-Export-Failures" --metric-name "FailedExports" --namespace "AWS/DynamoDB" --statistic "Sum" --period 300 --evaluation-periods 1 --threshold 1 --comparison-operator "GreaterThanOrEqualToThreshold" --alarm-actions "arn:aws:sns:region:account:Export-Alerts"
Cost Optimization:
- Use S3 Intelligent-Tiering for exported data
- Enable S3 lifecycle policies to archive or expire old exports (see the sketch after this list)
- Exported data files are already gzip-compressed, so additional compression is rarely worth the effort
- Delete failed export artifacts to avoid storage costs
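A lifecycle rule covering the export prefix is a one-time setup. A minimal sketch using put_bucket_lifecycle_configuration; the bucket name, prefix, and retention periods are hypothetical, and note that this call replaces any existing lifecycle configuration on the bucket:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, prefix, and retention periods -- adjust to your needs.
# Warning: this call replaces the bucket's existing lifecycle configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket="your-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-dynamodb-exports",
                "Filter": {"Prefix": "exports/"},
                "Status": "Enabled",
                # Move exports to Glacier after 30 days, delete them after a year
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```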
Troubleshooting Cross-Account Exports:
1. Bucket Policy (in destination account):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "dynamodb.amazonaws.com"
},
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::destination-bucket/exports/*",
"Condition": {
"StringEquals": {
"aws:SourceAccount": "source-account-id",
"aws:SourceArn": "arn:aws:dynamodb:region:source-account:table/source-table"
}
}
}
]
}
2. IAM policy for the exporting principal (in source account):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::destination-account:role/CrossAccountExportRole"
}
]
}

### Related errors

- How to fix "ImportConflictException: There was a conflict when attempting to import to the table" in DynamoDB
- How to fix "ResourceNotFoundException: Requested resource not found" in DynamoDB
- How to fix "TrimmedDataAccessException: The requested data has been trimmed" in DynamoDB Streams
- How to fix "GlobalTableNotFoundException: Global Table not found" in DynamoDB
- How to fix "InvalidExportTimeException: The specified ExportTime is outside of the point in time recovery window" in DynamoDB