This error occurs when Docker Swarm cannot find any node in the cluster that matches the placement constraints specified for a service. The fix involves adding the required labels to nodes or adjusting your service constraints.
The "no suitable node (scheduling constraints not satisfied)" error in Docker Swarm indicates that the scheduler cannot find any node in your cluster that satisfies the placement constraints you've defined for your service. Placement constraints are rules that restrict which nodes a service's tasks can run on, based on node attributes like labels, roles, or hostnames.

When you deploy a service with constraints like `--constraint node.labels.region==east` or `node.role==worker`, Docker Swarm's scheduler evaluates each node against these rules. If no node matches all the specified constraints, the service tasks remain in a "Pending" state, and you see this error.

Unlike placement preferences (which are best-effort hints), placement constraints are hard requirements. If the constraint cannot be satisfied, the tasks will not be scheduled at all. The scheduler will continuously try to reconcile the service's desired state, and once a suitable node becomes available (e.g., after adding the required label), the tasks will be deployed automatically.
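This all-or-nothing matching can be sketched in plain shell. This is illustrative only (not Docker's actual scheduler code), and the node names and labels are made up:

```shell
# Illustrative only: how a hard placement constraint filters nodes.
# Hypothetical cluster: two nodes, both labeled region=west.
nodes="worker-1:region=west worker-2:region=west"
constraint="region=east"   # the service's constraint (node.labels.region==east)

suitable=""
for entry in $nodes; do
  name="${entry%%:*}"      # node name before the colon
  labels="${entry#*:}"     # label list after the colon
  case " $labels " in
    *" $constraint "*) suitable="$name" ;;  # exact label match required
  esac
done

# A hard constraint is all-or-nothing: no match means no scheduling.
if [ -z "$suitable" ]; then
  echo "no suitable node (scheduling constraints not satisfied)"
else
  echo "schedule on $suitable"
fi
```

Relabel one node in the `nodes` string to `region=east` and the same loop finds a match, mirroring how Swarm reconciles automatically once the right label is added.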
First, check what constraints are defined for your service:
```shell
docker service inspect <service-name> --pretty
```

Look for the "Placement" section. You can also get just the constraints:

```shell
docker service inspect <service-name> --format '{{json .Spec.TaskTemplate.Placement.Constraints}}'
```

Note down the exact constraint expressions (e.g., `node.labels.region==east`).
View the tasks to see the full scheduling error:
```shell
docker service ps <service-name> --no-trunc
```

This shows all task attempts and their errors. For more detail on a specific task:

```shell
docker inspect <task-id>
```

The error message often indicates how many nodes were evaluated and why they failed.
Check what nodes exist and their current labels:
```shell
docker node ls
```

To see each node's status and availability at a glance:

```shell
docker node ls --format '{{.Hostname}}: {{.Status}} {{.Availability}}'
```

For the labels on a specific node:

```shell
docker node inspect <node-name> --format '{{json .Spec.Labels}}'
```

Compare the existing labels against what your constraint requires.
If the label doesn't exist, add it to one or more nodes. Run this from a manager node:
```shell
docker node update --label-add <key>=<value> <node-name>
```

For example, if your constraint is `node.labels.region==east`:

```shell
docker node update --label-add region=east worker-node-1
```

The service should automatically reconcile and deploy tasks to the now-eligible node.
Ensure the node with matching labels is not in drain mode:
```shell
docker node ls
```

If the node shows "Drain" availability, update it to active:

```shell
docker node update --availability active <node-name>
```

Nodes in drain mode will not accept new tasks, even if they match constraints.
Common typos include:
- `mode.role` instead of `node.role`
- `node.label` instead of `node.labels` (plural)
- A wrong comparison operator (e.g., a single `=` where `==` is required)
Valid constraint attributes include:
- `node.id` - Node ID
- `node.hostname` - Node hostname
- `node.role` - `manager` or `worker`
- `node.platform.os` - Operating system (e.g., `linux`)
- `node.platform.arch` - CPU architecture (e.g., `x86_64`)
- `node.labels.<key>` - Custom node labels
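A quick way to catch such typos is to check each constraint's attribute prefix against the list above. This helper is illustrative (not a Docker command), and the sample constraints are hypothetical:

```shell
# Illustrative helper: flag constraint strings whose attribute prefix
# is not one Swarm recognizes.
check_constraint() {
  case "$1" in
    node.id[=!]=*|node.hostname[=!]=*|node.role[=!]=*) echo "ok: $1" ;;
    node.platform.os[=!]=*|node.platform.arch[=!]=*)   echo "ok: $1" ;;
    node.labels.*[=!]=*)                               echo "ok: $1" ;;
    *)                                                 echo "suspect: $1" ;;
  esac
}

check_constraint "node.labels.region==east"  # ok
check_constraint "mode.role==worker"         # suspect: mode, not node
check_constraint "node.label.region==east"   # suspect: labels is plural
```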
If you need to fix the constraint, update the service:
```shell
docker service update --constraint-rm "wrong.constraint==value" \
  --constraint-add "node.labels.correct==value" <service-name>
```

After fixing the constraint or adding labels, verify the service:
```shell
docker service ls
docker service ps <service-name>
```

Tasks should now show "Running" status. If still pending, check for other issues like resource constraints or port conflicts.
Multiple constraints: When you specify multiple constraints, ALL must be satisfied (logical AND). For example, `--constraint node.role==worker --constraint node.labels.region==east` requires nodes that are workers AND have the `region=east` label.
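The AND semantics amount to a chain of checks where a single failure disqualifies the node. A minimal sketch, with hypothetical node attributes:

```shell
# Illustrative sketch of AND semantics: a node must pass every check.
# Hypothetical node under evaluation:
node_role="worker"
node_region="west"

ok=1
[ "$node_role" = "worker" ] || ok=0   # node.role==worker        -> passes
[ "$node_region" = "east" ] || ok=0   # node.labels.region==east -> fails

if [ "$ok" -eq 1 ]; then
  echo "node satisfies all constraints"
else
  echo "node rejected"                # one failed check disqualifies it
fi
```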
Constraint operators: Docker supports `==` (equals) and `!=` (not equals). There's no support for other operators like `>`, `<`, or regex matching.
Global vs replicated services: Global services deploy one task per node that matches the constraints. If no nodes match, the service will have zero tasks. Replicated services will have pending tasks until suitable nodes exist.
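For instance, a global service restricted to workers can be declared like this (a sketch; the service name and image are placeholders):

```yaml
services:
  node-agent:            # hypothetical service name
    image: nginx         # placeholder image
    deploy:
      mode: global       # one task per node that matches the constraints
      placement:
        constraints:
          - node.role == worker
```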
Docker Compose deploy constraints: In a compose file, specify constraints under deploy.placement.constraints:
```yaml
services:
  web:
    image: nginx
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.region == east
```

Preferences vs constraints: If you want best-effort placement hints rather than hard requirements, use preferences instead:
```shell
docker service create --placement-pref 'spread=node.labels.datacenter' nginx
```

Debugging tip: In Swarm mode the scheduler runs inside the Docker daemon on manager nodes, so check the daemon logs on a manager (on systemd-based hosts):

```shell
journalctl -u docker.service
```

Resource constraints: Remember that memory/CPU reservations are also checked during scheduling. A node might match label constraints but still fail due to insufficient resources. Check a node's capacity with:

```shell
docker node inspect <node> --format '{{json .Description.Resources}}'
```
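The resource check is simple arithmetic; with hypothetical numbers:

```shell
# Hypothetical numbers: a node can pass every label constraint and still
# be rejected because its remaining capacity cannot cover the reservation.
node_mem=$((4 * 1024 * 1024 * 1024))    # node reports 4 GiB total
reserved=$((3 * 1024 * 1024 * 1024))    # 3 GiB already reserved by tasks
requested=$((2 * 1024 * 1024 * 1024))   # service asks for --reserve-memory 2g

if [ $((reserved + requested)) -gt "$node_mem" ]; then
  echo "node rejected: insufficient memory for reservation"
fi
```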
Built-in attributes: In addition to custom labels, you can constrain on built-in node attributes like `node.platform.os` and `node.platform.arch` for multi-architecture clusters.