This error occurs when Docker cannot find the NVIDIA container runtime. The fix requires installing the NVIDIA Container Toolkit and configuring Docker to recognize the nvidia runtime.
When you run a Docker container with `--runtime=nvidia` or `--gpus` flag, Docker needs a specialized runtime to provide GPU access inside the container. The "Unknown runtime specified nvidia" error means Docker's daemon configuration doesn't have the NVIDIA runtime registered. The NVIDIA Container Toolkit (formerly nvidia-docker2) provides this runtime by installing `nvidia-container-runtime` and registering it with Docker. Without this toolkit properly installed and configured, Docker has no way to pass through GPU devices to containers. This error commonly appears after a fresh system setup, when NVIDIA drivers are updated, after Docker reinstallation, or when using Docker Snap packages which have different configuration paths than the standard Docker installation.
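Before changing anything, you can check which runtimes the daemon currently has registered. A quick check, assuming the docker CLI can reach the daemon (output format may vary slightly between Docker versions):

```bash
# List the runtimes the daemon knows about; "nvidia" should appear once the toolkit is configured
docker info --format '{{json .Runtimes}}'
```

If nvidia is missing from this list, continue with the steps below.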
First, confirm your NVIDIA drivers are properly installed on the host:
```bash
nvidia-smi
```

You should see your GPU model and driver version. If this command fails, you need to install NVIDIA drivers before proceeding.
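The exact driver installation steps depend on your distribution. On Ubuntu, for example, the ubuntu-drivers tool (if available) can pick and install the recommended driver; this is just one sketch, not the only way to install drivers:

```bash
# Show detected GPUs and suggested drivers, then install the recommended one
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot
```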
Install the NVIDIA Container Toolkit, which provides the nvidia runtime:
Ubuntu/Debian:
```bash
# Add NVIDIA package repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

RHEL/CentOS/Fedora:
```bash
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo yum install -y nvidia-container-toolkit
```
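After the package installs, it can be worth confirming the toolkit's CLI is actually on your PATH before configuring Docker (a quick sanity check; output format may differ between toolkit versions):

```bash
# Confirm the NVIDIA Container Toolkit CLI and runtime binary are installed
nvidia-ctk --version
which nvidia-container-runtime
```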
Use the nvidia-ctk tool to automatically configure Docker:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
```

This command modifies /etc/docker/daemon.json to register the nvidia runtime. You can verify the configuration:
```bash
cat /etc/docker/daemon.json
```

You should see something like:
```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

Restart Docker to apply the configuration changes:
```bash
sudo systemctl restart docker
```

Or if not using systemd:

```bash
sudo service docker restart
```
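If Docker fails to come back up after the restart, the daemon log usually pinpoints the problem (most often a JSON syntax error in daemon.json). On systemd-based systems you can inspect recent log entries:

```bash
# Show the last 30 lines of the Docker daemon log
sudo journalctl -u docker --no-pager -n 30
```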
Verify the fix by running a container with GPU access:

Using --runtime flag (legacy method):
```bash
docker run --rm --runtime=nvidia nvidia/cuda:12.0-base nvidia-smi
```

Using --gpus flag (Docker 19.03+, recommended):

```bash
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
```

Both commands should display your GPU information inside the container.
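The --gpus flag is only available on Docker 19.03 or newer; if you are unsure which version the daemon is running, you can check it directly:

```bash
# Print the server (daemon) version; 19.03+ supports --gpus
docker version --format '{{.Server.Version}}'
```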
If the nvidia-ctk tool doesn't work, manually edit the Docker daemon configuration:
```bash
sudo nano /etc/docker/daemon.json
```

Add or merge this content:
```json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

If the file already has content, merge the "runtimes" section with your existing settings.
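Before restarting, it is worth validating that the edited file is still well-formed JSON, since a stray comma will prevent the daemon from starting. One way to do this, assuming Python 3 is installed:

```bash
# Fails with an error message if daemon.json contains invalid JSON
python3 -m json.tool /etc/docker/daemon.json
```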
Then restart Docker:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```

### Docker Snap Installation
If you installed Docker via Snap, the configuration file is in a different location:
```bash
# Snap Docker uses this config path
/var/snap/docker/current/config/daemon.json
```

Edit this file instead of /etc/docker/daemon.json:

```bash
sudo nano /var/snap/docker/current/config/daemon.json
```
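Snap-packaged Docker is also restarted through snap rather than systemctl. Assuming the docker snap is the one you installed:

```bash
# Restart the snap-packaged Docker daemon to pick up the config change
sudo snap restart docker
```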
### Docker Desktop on Windows/WSL

On Windows with Docker Desktop, you cannot edit /etc/docker/daemon.json in WSL directly. Instead:
1. Open Docker Desktop
2. Go to Settings > Docker Engine
3. Add the runtime configuration to the JSON editor
4. Click "Apply & Restart"
### Using --gpus Instead of --runtime
Docker 19.03+ introduced the --gpus flag, which is now the recommended approach. It doesn't require the --runtime=nvidia flag or a runtimes entry in daemon.json, though the NVIDIA Container Toolkit must still be installed:
```bash
# Access all GPUs
docker run --gpus all nvidia/cuda:12.0-base nvidia-smi

# Access specific GPUs
docker run --gpus '"device=0,1"' nvidia/cuda:12.0-base nvidia-smi

# Access a specific number of GPUs
docker run --gpus 2 nvidia/cuda:12.0-base nvidia-smi
```
### Docker Compose Configuration

For Docker Compose, use file format 2.3+ with the runtime directive, or format 3.x with deploy.resources:
Compose 2.3+ (runtime directive):
version: "2.3"
services:
gpu-app:
image: nvidia/cuda:12.0-base
runtime: nvidia
environment:
- NVIDIA_VISIBLE_DEVICES=allCompose 3.x (deploy resources):
version: "3.8"
services:
gpu-app:
image: nvidia/cuda:12.0-base
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]### Setting NVIDIA as Default Runtime
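With either Compose file, you can confirm GPU access the same way as with docker run. A minimal check, assuming the file is saved as docker-compose.yml in the current directory and you are using the docker compose plugin (use docker-compose with older installations):

```bash
# Run nvidia-smi in a one-off container for the gpu-app service
docker compose run --rm gpu-app nvidia-smi
```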
### Setting NVIDIA as Default Runtime

To avoid specifying the runtime for every container, set nvidia as the default in /etc/docker/daemon.json:
```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
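As with the earlier changes, Docker has to be restarted for the default runtime to take effect; afterwards you can confirm what the daemon treats as its default (assuming a reasonably recent Docker that exposes this field):

```bash
sudo systemctl restart docker
# Should print "nvidia" once the default runtime change is active
docker info --format '{{.DefaultRuntime}}'
```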
### Nouveau Driver Conflicts

If you're on Linux and seeing issues even after configuration, the nouveau open-source driver might be conflicting with the NVIDIA proprietary driver. Blacklist nouveau:
sudo bash -c "echo 'blacklist nouveau' >> /etc/modprobe.d/blacklist-nouveau.conf"
sudo bash -c "echo 'options nouveau modeset=0' >> /etc/modprobe.d/blacklist-nouveau.conf"
sudo update-initramfs -u
sudo rebootimage operating system "linux" cannot be used on this platform
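After the reboot, you can confirm that nouveau is no longer loaded; if the command below prints nothing, the blacklist took effect:

```bash
# No output means the nouveau module is not loaded
lsmod | grep nouveau
```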