The 'No such node' error occurs when Docker Swarm cannot find a node matching the ID or hostname you specified. This typically happens when a node has left the swarm, was removed, or the identifier is incorrect. Use docker node ls to verify available nodes.
The "Error response from daemon: No such node" message in Docker Swarm indicates that the swarm manager cannot locate a node with the ID or hostname you provided in your command. Docker Swarm maintains a registry of all nodes (both managers and workers) that have joined the cluster. When you run commands like `docker node inspect`, `docker node rm`, `docker node update`, or `docker node promote`, Docker searches for a node matching your identifier. If no match is found, it returns this error. This error commonly appears when: - A node was forcibly removed or left the swarm - The swarm was reinitialized with `--force-new-cluster` - You're using an outdated node ID from before a swarm rebuild - There's a typo in the node identifier
First, check what nodes are currently in your swarm:
# List all nodes in the swarm (run on a manager)
docker node ls
# Output includes ID, hostname, status, availability, and manager status
# ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
# abc123def456 * manager1 Ready Active Leader
# xyz789ghi012 worker1 Ready Active
# Get detailed node information
docker node ls --format "table {{.ID}}\t{{.Hostname}}\t{{.Status}}\t{{.ManagerStatus}}"Note: You must run node commands on a swarm manager node. Worker nodes will return "This node is not a swarm manager."
Docker Swarm accepts either the node ID or hostname:
# Get full node IDs
docker node ls --format "{{.ID}}: {{.Hostname}}"
# You can use the full ID
docker node inspect abc123def456ghijklmnopqrs
# Or use a unique prefix of the ID
docker node inspect abc1
# Or use the hostname
docker node inspect worker1

Common mistakes:
- Using the container ID instead of node ID
- Confusing hostnames between different swarms
- Copy-pasting node IDs that include invisible characters (see the check below)
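That last mistake is easy to catch by trimming the pasted value and counting how many nodes it actually matches. A rough sketch (pass the suspect identifier as the first argument):

```bash
#!/usr/bin/env bash
# Strip whitespace/control characters from a pasted identifier, then count
# how many nodes match it as an ID prefix or as a hostname.
CLEAN="$(printf '%s' "$1" | tr -d '[:space:][:cntrl:]')"

MATCHES="$(docker node ls --format '{{.ID}} {{.Hostname}}' \
  | grep -c -e "^$CLEAN" -e " $CLEAN\$")"

case "$MATCHES" in
  0) echo "No node matches '$CLEAN' - run docker node ls to check" ;;
  1) echo "'$CLEAN' resolves to exactly one node" ;;
  *) echo "Ambiguous: '$CLEAN' matches $MATCHES nodes" ;;
esac
```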
If a node has left the swarm, it may still show up in docker node ls with a Down status, or it may already be gone while other parts of your setup still reference it:
# Check if node appears as "Down" (not yet removed)
docker node ls -f "name=worker1"
# If status is "Down", you can remove it
docker node rm worker1
# Force remove a node that's unreachable
docker node rm --force worker1
# If the node already left and isn't listed, the error is expected
# - the node is already gone from the swarm

A node that gracefully left with `docker swarm leave` may need to be explicitly removed from the manager's node list.
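If several departed nodes are still listed as Down, you can sweep them out in one pass. A hedged sketch that prints the candidates and asks for confirmation before removing anything:

```bash
#!/usr/bin/env bash
# List every node the manager reports as Down, then remove them after confirmation.
docker node ls --format '{{.ID}} {{.Hostname}} {{.Status}}' | awk '$3 == "Down"'

read -r -p "Remove all of the nodes listed above? [y/N] " answer
[ "$answer" = "y" ] || exit 0

for id in $(docker node ls --format '{{.ID}} {{.Status}}' | awk '$2 == "Down" {print $1}'); do
  docker node rm --force "$id"
done
```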
Services may have constraints referencing nodes that no longer exist:
# List services and their constraints
docker service ls
docker service inspect --format '{{.Spec.TaskTemplate.Placement.Constraints}}' myservice
# If a constraint references a removed node, update the service
docker service update --constraint-rm "node.hostname==oldworker" myservice
# Or update to a valid node
docker service update --constraint-add "node.hostname==newworker" myservice
# Check tasks that may be pending due to missing nodes
docker service ps myservice

Services with node constraints pointing to non-existent nodes will fail to schedule tasks.
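To find every service whose `node.hostname` constraint points at a host that is no longer in the swarm, you can cross-check constraints against the current node list. A sketch that only handles hostname equality constraints; other constraint types pass through untouched:

```bash
#!/usr/bin/env bash
# Flag services whose node.hostname constraints reference hosts that
# are no longer members of the swarm.
HOSTS="$(docker node ls --format '{{.Hostname}}')"

for svc in $(docker service ls --format '{{.Name}}'); do
  docker service inspect "$svc" \
    --format '{{range .Spec.TaskTemplate.Placement.Constraints}}{{println .}}{{end}}' 2>/dev/null |
  while read -r constraint; do
    case "$constraint" in
      node.hostname==*)
        host="${constraint#node.hostname==}"
        echo "$HOSTS" | grep -qx "$host" \
          || echo "Service '$svc' pins to missing node '$host'"
        ;;
    esac
  done
done
```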
After reinitializing a swarm with --force-new-cluster, old node IDs become invalid:
# If you see repeated "Error getting node: node not found" in logs
# Check for stacks that may reference old nodes
docker stack ls
docker stack ps mystack
# Remove and redeploy stacks to clear stale state
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack
# The stale node references should clear after removing affected stacks

When you use --force-new-cluster, the manager you run it on becomes the swarm's only manager; other former managers are dropped from the member list and must rejoin, so their old node IDs stop resolving.
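To see which stacks still have tasks tied to missing nodes before deciding what to remove and redeploy, you can sweep every stack at once. A minimal sketch:

```bash
#!/usr/bin/env bash
# Print each stack's tasks with the node they were assigned to and any error,
# so stale node references stand out before you remove and redeploy.
for stack in $(docker stack ls --format '{{.Name}}'); do
  echo "=== $stack ==="
  docker stack ps "$stack" --no-trunc \
    --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}\t{{.Error}}'
done
```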
If a node needs to rejoin the swarm, get a new join token:
# On a manager, get the worker join token
docker swarm join-token worker
# Or get the manager join token
docker swarm join-token manager
# On the node that needs to rejoin, first leave any old swarm
docker swarm leave --force
# Then join with the new token
docker swarm join --token SWMTKN-1-xxxxx manager-ip:2377

After rejoining, the node will have a new ID. Update any scripts or configurations that referenced the old ID.
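If you rejoin nodes regularly, the token lookup can be scripted from a manager. A sketch that assumes passwordless SSH to the worker, that 2377 is the swarm port in use, and that the address reported by docker info is reachable from the worker:

```bash
#!/usr/bin/env bash
# Run on a manager: fetch the current worker join token and this manager's
# swarm address, then have a remote node leave any old swarm and rejoin.
WORKER_HOST="worker1"   # placeholder; the host you want to rejoin

TOKEN="$(docker swarm join-token -q worker)"
MANAGER_ADDR="$(docker info --format '{{.Swarm.NodeAddr}}')"

ssh "$WORKER_HOST" "docker swarm leave --force || true; \
  docker swarm join --token $TOKEN ${MANAGER_ADDR}:2377"
```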
Ensure you're connected to the correct swarm manager:
# Verify swarm status on current node
docker info --format '{{.Swarm.LocalNodeState}}'
# Should return "active" if part of a swarm
# Check which node you're on
docker info --format '{{.Swarm.NodeID}}'
# Verify manager status
docker info --format '{{.Swarm.ControlAvailable}}'
# Returns "true" if this is a manager
# If using Docker contexts, ensure correct context
docker context ls
docker context use default

If you have multiple swarms or Docker contexts, ensure you're targeting the correct one.
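A short guard at the top of any script that runs node commands makes the wrong-context case fail loudly instead of producing confusing "No such node" or "not a swarm manager" errors. A minimal sketch:

```bash
#!/usr/bin/env bash
# Abort early unless the current Docker endpoint is an active swarm manager.
STATE="$(docker info --format '{{.Swarm.LocalNodeState}}')"
IS_MANAGER="$(docker info --format '{{.Swarm.ControlAvailable}}')"

if [ "$STATE" != "active" ] || [ "$IS_MANAGER" != "true" ]; then
  echo "Context '$(docker context show)' is not an active swarm manager" >&2
  exit 1
fi
```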
### Node IDs and Swarm Reinitializations
When a swarm is reinitialized (using docker swarm init --force-new-cluster), the other managers' memberships are invalidated, and any node that leaves and rejoins afterwards comes back with a new ID:
# Before reinitialization - IDs like abc123
docker node ls
# After reinitialization - new IDs generated
# Old node IDs will return "No such node"
# Nodes that need to rejoin must use freshly generated join tokens

If you frequently reinitialize swarms, consider using automation that discovers current node IDs rather than hardcoding them.
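One way to do that is to resolve the node ID from the hostname at runtime instead of baking IDs into scripts. A sketch; `worker1` is a placeholder, and the name filter is a prefix match, so check for ambiguity if your hostnames overlap:

```bash
#!/usr/bin/env bash
# Look up a node's current ID by hostname instead of hardcoding the ID.
TARGET_HOST="worker1"   # placeholder hostname

NODE_ID="$(docker node ls --filter "name=$TARGET_HOST" --format '{{.ID}}')"

if [ -z "$NODE_ID" ]; then
  echo "No node named '$TARGET_HOST' in this swarm" >&2
  exit 1
fi

docker node update --availability active "$NODE_ID"
```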
### Raft Consensus and Node State
Swarm managers maintain node state through Raft consensus. When a node disappears:
# Node states in Raft:
# - Ready: Node is healthy and accepting tasks
# - Down: Node hasn't responded to health checks
# - Disconnected: Node is unreachable but not yet timed out
# View manager Raft status
docker node ls -f "role=manager"
# Check if enough managers are available
# You need (N/2)+1 managers for quorum
# 3 managers -> need 2 available
# 5 managers -> need 3 available

If you lose quorum, you may need --force-new-cluster to recover, which invalidates the other managers' node entries and can leave stale node references behind.
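A quick way to check where you stand relative to quorum is to count managers and how many of them are currently Leader or Reachable. A minimal sketch; note that if quorum is already lost, docker node ls itself may refuse to answer:

```bash
#!/usr/bin/env bash
# Compare the number of healthy managers against the (N/2)+1 quorum requirement.
TOTAL="$(docker node ls --filter role=manager --format '{{.ID}}' | wc -l)"
HEALTHY="$(docker node ls --filter role=manager --format '{{.ManagerStatus}}' \
  | grep -c -e Leader -e Reachable)"
QUORUM=$(( TOTAL / 2 + 1 ))

echo "managers=$TOTAL healthy=$HEALTHY quorum_needed=$QUORUM"
if [ "$HEALTHY" -lt "$QUORUM" ]; then
  echo "Quorum lost: swarm management commands will fail until you recover" >&2
fi
```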
### Node Labels and Constraints
Node labels are stored by the swarm managers and survive node restarts, but they are lost when a node is removed and rejoins with a new ID:
# Add labels to nodes
docker node update --label-add env=production worker1
# View node labels
docker node inspect --format '{{.Spec.Labels}}' worker1
# If node is removed and rejoins, labels are lost
# Re-add labels after node rejoins

Store node label configurations in scripts or configuration management tools.
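One lightweight option is a plain mapping file that a script re-applies after nodes rejoin. Both the file name `node-labels.txt` and its format here are just an illustration:

```bash
#!/usr/bin/env bash
# Re-apply node labels from a "hostname key=value" mapping file, e.g.:
#   worker1 env=production
#   worker2 env=staging
while read -r host label; do
  [ -z "$host" ] && continue                 # skip blank lines
  docker node update --label-add "$label" "$host" \
    || echo "Skipped '$host' (not currently in the swarm?)" >&2
done < node-labels.txt
```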
### Debugging Persistent "Node Not Found" Errors
If your manager logs continuously show "node not found" errors:
# Check Docker daemon logs
journalctl -u docker.service -f | grep "node.*not found"
# Common causes:
# 1. Stale tasks referencing removed nodes
# 2. Services with constraints for non-existent nodes
# 3. Incomplete stack removal
# List all tasks across all services
docker service ls -q | xargs -I {} docker service ps {} --filter "desired-state=running"
# Look for tasks stuck in "Pending" state
docker service ps myservice --filter "desired-state=running" --format "{{.Node}} {{.CurrentState}}"

### Swarm Node Cleanup Best Practices
# Graceful node removal process:
# 1. Drain the node first (reschedules tasks)
docker node update --availability drain worker1
# 2. Wait for tasks to migrate
docker node ps worker1
# 3. On the worker, leave the swarm
docker swarm leave
# 4. On the manager, remove the node entry
docker node rm worker1
# This prevents "No such node" errors from orphaned references

### Multi-Manager Scenarios
In multi-manager setups, ensure you're running commands on the leader or a reachable manager:
# Find the leader
docker node ls -f "role=manager" --format "{{.Hostname}}: {{.ManagerStatus}}"
# If the current manager is "Reachable" but not "Leader",
# commands may fail if there's a quorum issue
# Check swarm join status
docker info | grep -A 10 "Swarm:"

Commands will fail with various errors, including "No such node", if the Raft consensus is disrupted.