This error occurs when attempting to access a Supabase Storage bucket that does not exist, has an incorrect name, or requires permissions you lack. It commonly appears during file upload, download, or bucket operations when the specified bucket cannot be found or accessed.
The "Bucket not found" error is thrown by Supabase Storage API when your application attempts to interact with a storage bucket that either doesn't exist in your project, is misspelled in your code, or is inaccessible due to Row Level Security (RLS) policies. Supabase Storage organizes files into buckets (similar to folders or containers), and each bucket must be explicitly created before use. This error can manifest in two distinct scenarios: either the bucket genuinely doesn't exist in your Supabase dashboard, or it exists but your current authentication context lacks the necessary permissions to access it. Understanding the difference is crucial for resolving the issue effectively.
Log in to your Supabase project dashboard and navigate to the Storage section. Check that the bucket name matches exactly (bucket names are case-sensitive) what you're using in your code.
In Supabase Dashboard:
1. Go to Storage > Buckets
2. Verify the bucket name spelling and capitalization
3. Note whether the bucket is marked as "Public" or "Private"
If the bucket doesn't exist, create it using the dashboard or programmatically.
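You can also verify bucket names from code by listing the buckets the current client is allowed to see. This is a sketch assuming a supabase-js v2 client; note that listBuckets only returns buckets your role has select access to on storage.buckets.
// List buckets visible to this client (requires select access on storage.buckets)
const { data: buckets, error } = await supabase.storage.listBuckets();

if (error) {
  console.error('Could not list buckets:', error.message);
} else {
  console.log('Visible buckets:', buckets.map((b) => b.name));
}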
Ensure your code uses the exact bucket name including proper capitalization:
// ❌ Wrong - case mismatch
const { data, error } = await supabase
  .storage
  .from('MyBucket') // Bucket is actually "mybucket"
  .upload('file.png', file);

// ✅ Correct - exact match
const { data, error } = await supabase
  .storage
  .from('mybucket')
  .upload('file.png', file);

if (error) {
  console.error('Error:', error.message);
}

Bucket names in Supabase are case-sensitive and must match exactly.
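A simple way to avoid case mismatches is to define the bucket name once as a constant and reuse it everywhere; the constant name below is illustrative.
// Define the bucket name once and reuse it to avoid typos and case mismatches
const STORAGE_BUCKET = 'mybucket';

const { data, error } = await supabase.storage
  .from(STORAGE_BUCKET)
  .upload('file.png', file);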
If the bucket exists but you're getting the error, you likely need to add a Row Level Security policy to grant access to storage.buckets:
For development/testing (allow all):
-- In Supabase SQL Editor
create policy "Allow bucket access for all users"
on storage.buckets
for select
using ( true );

For production (authenticated users only):
create policy "Allow bucket access for authenticated users"
on storage.buckets
for select
to authenticated
using ( true );

For a specific bucket:
create policy "Allow access to specific bucket"
on storage.buckets
for select
using ( name = 'mybucket' );

Note: To list or access buckets, you need a select policy on storage.buckets, not storage.objects.
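After creating a policy, you can confirm that the client can now see the bucket. A quick check, assuming a supabase-js v2 client authenticated as the role the policy targets:
// Verify the select policy on storage.buckets works for this client
const { data: bucket, error } = await supabase.storage.getBucket('mybucket');

if (error) {
  console.error('Still cannot access bucket:', error.message);
} else {
  console.log('Bucket visible:', bucket.name, 'public:', bucket.public);
}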
If the bucket doesn't exist, create it programmatically with proper error handling:
const { data: existingBucket, error: getError } = await supabase
  .storage
  .getBucket('mybucket');

if (getError && getError.message.includes('not found')) {
  console.log('Bucket not found, creating...');

  const { data: newBucket, error: createError } = await supabase
    .storage
    .createBucket('mybucket', {
      public: false, // Set to true for public access
      fileSizeLimit: 1024 * 1024 * 10 // 10MB limit
    });

  if (createError) {
    console.error('Failed to create bucket:', createError.message);
  } else {
    console.log('Bucket created successfully:', newBucket);
  }
} else if (getError) {
  console.error('Error checking bucket:', getError.message);
} else {
  console.log('Bucket already exists:', existingBucket);
}

Required permissions: insert on storage.buckets for createBucket, select for getBucket.
If you're using the service_role key with a client that picks up user sessions (like SSR), create a separate client:
// ❌ Wrong - service_role with SSR client
import { createServerClient } from '@supabase/ssr';

const supabase = createServerClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY, // This won't work as expected
  // ...
);

// ✅ Correct - separate plain client for service_role
import { createClient } from '@supabase/supabase-js';

// For service_role operations (bypasses RLS)
const supabaseAdmin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
  {
    auth: {
      autoRefreshToken: false,
      persistSession: false
    }
  }
);

// For user operations (respects RLS)
const supabaseUser = createServerClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY,
  // ...
);

Use supabaseAdmin for operations that should bypass RLS, and supabaseUser for user-scoped operations.
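As a usage sketch (bucket and file paths are placeholders), server-side maintenance work goes through the admin client, while request-scoped work stays on the user client:
// Admin task: write to a bucket regardless of RLS (server-side only!)
const { error: adminError } = await supabaseAdmin.storage
  .from('mybucket')
  .upload('reports/summary.json', JSON.stringify({ ok: true }), {
    contentType: 'application/json',
  });

// User task: upload on behalf of the signed-in user, subject to RLS
const { error: userError } = await supabaseUser.storage
  .from('mybucket')
  .upload(`uploads/${Date.now()}.txt`, 'hello');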
If your bucket is private but your URL includes "public" as the first path segment, Supabase will interpret "public" as the bucket name:
// ❌ Wrong - "public" in path for private bucket
const url = supabase.storage
.from('mybucket')
.getPublicUrl('public/file.png'); // Interprets "public" as bucket name
// ✅ Correct - for public buckets
const url = supabase.storage
.from('mybucket') // Bucket must be marked as public in dashboard
.getPublicUrl('file.png');
// ✅ Correct - for private buckets, use signed URLs
const { data, error } = await supabase.storage
.from('mybucket')
.createSignedUrl('file.png', 60); // 60 second expiry
if (error) {
console.error('Error creating signed URL:', error.message);
} else {
console.log('Signed URL:', data.signedUrl);
}Remember: "Public" bucket only means files have public URLs; all other operations still require RLS policies.
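For completeness, a file in a private bucket can also be fetched directly through the authenticated client instead of via a URL; a small sketch, assuming the caller satisfies the storage.objects select policy:
// Download directly from a private bucket (subject to storage.objects RLS)
const { data: blob, error } = await supabase.storage
  .from('mybucket')
  .download('file.png');

if (error) {
  console.error('Download failed:', error.message);
} else {
  console.log('Downloaded', blob.size, 'bytes');
}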
Known Intermittent Bug: There's a documented issue (supabase/storage#748) where newly created buckets may intermittently return "Bucket not found" errors even though they exist. This is particularly common in new projects. If you encounter this, wait a few minutes and retry, or restart your Supabase services if self-hosting.
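If you want to handle this transient case defensively, a small retry wrapper can help. This is a generic sketch (the function name and retry parameters are arbitrary), not something built into supabase-js:
// Retry a storage call a few times when it intermittently reports "Bucket not found"
async function withBucketRetry(operation, retries = 3, delayMs = 2000) {
  let result = await operation();
  for (let attempt = 1; attempt < retries && result.error?.message.includes('Bucket not found'); attempt++) {
    console.warn(`Bucket not found (attempt ${attempt}/${retries}), retrying...`);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    result = await operation();
  }
  return result;
}

// Usage
const { data, error } = await withBucketRetry(() =>
  supabase.storage.from('mybucket').upload('file.png', file)
);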
RLS Policy Hierarchy: Understanding the difference between storage.buckets and storage.objects policies is crucial. To list buckets, you need select permission on storage.buckets. To perform file operations (upload/download), you need policies on storage.objects. Many developers only set up object policies and wonder why bucket operations fail.
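In client terms, the two kinds of policies gate different calls; roughly (assuming a supabase-js v2 client):
// Gated by select policies on storage.buckets
const { data: buckets } = await supabase.storage.listBuckets();

// Gated by policies on storage.objects (select for listing/downloading, insert for uploading)
const { data: files } = await supabase.storage.from('mybucket').list('', { limit: 10 });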
Self-Hosted Port Configuration: If running self-hosted Supabase and experiencing persistent bucket errors, check your docker-compose.yml storage service configuration. The internal port (default 5000) may conflict with other services. Change it to an available port like 5003, and update the corresponding entry in volumes/api/kong.yml to match.
Public vs Private Bucket Confusion: Marking a bucket as "public" in Supabase only enables public URL generation for files; it does NOT bypass RLS policies for operations like listing, uploading, or deleting files. You still need appropriate RLS policies even for public buckets.
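A concrete consequence, sketched below: generating a public URL needs no policy at all (it is built client-side), but uploading to the same public bucket still requires an insert policy on storage.objects.
// Works for any file in a public bucket; no RLS policy required to build the URL
const { data } = supabase.storage.from('mybucket').getPublicUrl('file.png');
console.log(data.publicUrl);

// Still fails without an insert policy on storage.objects, even though the bucket is public
const { error } = await supabase.storage.from('mybucket').upload('file.png', file);
if (error) console.error('Upload blocked:', error.message);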
Multi-Project Development: If working with multiple Supabase projects, ensure your environment variables point to the correct project. A common mistake is having staging bucket names in production code or vice versa. Use environment-specific configuration to avoid cross-project access attempts.
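One simple guard is to drive both the project credentials and the bucket name from environment-specific variables; a sketch in which the variable names are illustrative:
// Environment-specific storage configuration (variable names are illustrative)
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// e.g. 'uploads-staging' in .env.staging, 'uploads-prod' in .env.production
const STORAGE_BUCKET = process.env.NEXT_PUBLIC_STORAGE_BUCKET!;

const { error } = await supabase.storage.from(STORAGE_BUCKET).upload('file.png', file);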