This error occurs when you try to initialize or join a Docker Swarm on a node that is already a member of an existing swarm. The fix involves leaving the current swarm first before joining or creating a new one.
The "This node is already part of a swarm" error indicates that you're attempting to run `docker swarm init` or `docker swarm join` on a machine that is already participating in a Docker Swarm cluster. Docker only allows a node to be a member of one swarm at a time. This commonly happens when:

- You previously initialized a swarm and forgot about it
- Your machine was rebooted but retained its swarm membership state
- You're trying to join a different swarm without leaving the current one
- You accidentally ran `docker swarm init` on a machine that was already a worker or manager in a swarm

Docker Swarm mode stores membership information locally on each node, which persists across Docker daemon restarts and system reboots. To join a different swarm or reinitialize, you must explicitly leave the current swarm first.
First, verify that the node is indeed part of a swarm and understand its current role:
```bash
docker info | grep -A 5 "Swarm"
```

This will show output like:

```
Swarm: active
 NodeID: abc123xyz
 Is Manager: true
 ClusterID: def456uvw
 Managers: 1
 Nodes: 1
```

You can also check the node's role directly:

```bash
docker node ls
```

If you're a manager, this shows all nodes in the swarm. If you see "Error response from daemon: This node is not a swarm manager", the node is a worker (not a manager).
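Instead of grepping, `docker info` also supports Go-template output, which is easier to script against. A minimal sketch, assuming the `{{.Swarm.LocalNodeState}}` template field (part of Docker's `info` data); the helper names are our own:

```bash
# swarm_state: print the local node's swarm membership state
# ("active", "inactive", "pending", ...). Falls back to "inactive"
# when the Docker CLI or daemon is unavailable.
swarm_state() {
  docker info --format '{{.Swarm.LocalNodeState}}' 2>/dev/null || echo inactive
}

# next_step: map a state string to the action this article recommends
next_step() {
  case "$1" in
    active)   echo "leave the current swarm first" ;;
    inactive) echo "safe to init or join" ;;
    *)        echo "inspect the node manually" ;;
  esac
}

next_step "$(swarm_state)"
```

This makes the check usable in scripts without parsing human-oriented `docker info` output.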
If the node is a worker (not a manager), you can simply leave the swarm:
```bash
docker swarm leave
```

This gracefully removes the node from the swarm. The output will confirm:

```
Node left the swarm.
```

After leaving, verify the swarm status:

```bash
docker info | grep "Swarm"
```

It should now show:

```
Swarm: inactive
```

You can now join a new swarm or initialize a fresh one.
If the node is a manager, Docker requires the `--force` flag to leave:

```bash
docker swarm leave --force
```

Warning: If this is the last manager in the swarm, the entire swarm state (services, networks, secrets) will be lost. Make sure you understand the implications before proceeding.
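Before forcing a leave, it's worth confirming you are not the last manager. One way to sketch that check (the helper name is ours) is to parse the `Managers:` line from the `docker info` output shown earlier:

```bash
# is_last_manager: given captured `docker info` output, succeed when
# exactly one manager remains -- i.e. forcing a leave would destroy
# the swarm's state.
is_last_manager() {
  [ "$(printf '%s\n' "$1" | awk '/Managers:/ {print $2; exit}')" = "1" ]
}

info="$(docker info 2>/dev/null || true)"
if is_last_manager "$info"; then
  echo "last manager: docker swarm leave --force will destroy the swarm"
fi
```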
For a safer approach with multiple managers:
1. First demote the manager to a worker:

```bash
# Run from another manager node
docker node demote <node-id>
```

2. Then leave normally:

```bash
# Run from the demoted node
docker swarm leave
```

3. Remove the node from the swarm's node list (from a remaining manager):

```bash
docker node rm <node-id>
```

After leaving the previous swarm, you can now create a new one or join an existing one.
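The `<node-id>` values above come from `docker node ls`. When scripting the demotion, a small helper (hypothetical, ours) can map a hostname to its node ID using Docker's template output:

```bash
# node_id_for: print the node ID matching a hostname, reading
# "ID HOSTNAME" pairs as produced by:
#   docker node ls --format '{{.ID}} {{.Hostname}}'
node_id_for() {
  printf '%s\n' "$1" | awk -v host="$2" '$2 == host {print $1; exit}'
}

nodes="$(docker node ls --format '{{.ID}} {{.Hostname}}' 2>/dev/null || true)"
node_id_for "$nodes" "worker-2"   # prints nothing if the hostname is not in the swarm
```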
To initialize a new swarm:
```bash
docker swarm init
```

Or specify an advertise address for multi-network hosts:

```bash
docker swarm init --advertise-addr <IP-ADDRESS>
```

To join an existing swarm as a worker:

```bash
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377
```

Get the join token from a manager node:

```bash
docker swarm join-token worker
```

To join as a manager:

```bash
docker swarm join --token <MANAGER-TOKEN> <MANAGER-IP>:2377
```

Get the manager token from an existing manager:

```bash
docker swarm join-token manager
```

If you're trying to recover a swarm after losing quorum (a majority of managers), use the `--force-new-cluster` flag:

```bash
docker swarm init --force-new-cluster
```

This reinitializes the swarm on the current node while preserving existing services, networks, and other swarm objects. Use this when:
- Other manager nodes are permanently lost
- The swarm lost quorum and can't be recovered normally
- You need to rebuild the swarm from a single remaining manager
Important considerations:
- Run this only on a node that was previously a manager
- All other nodes will need to rejoin the swarm
- Worker nodes will be marked as "down" until they rejoin
- This is a recovery operation, not for routine use
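After a forced reinit, you can gauge how much of the cluster still needs to rejoin by counting nodes marked "Down". A sketch (the helper name is ours) against `docker node ls` template output:

```bash
# count_down_nodes: count nodes whose status is "Down" in
# "HOSTNAME STATUS" pairs, as produced by:
#   docker node ls --format '{{.Hostname}} {{.Status}}'
count_down_nodes() {
  printf '%s\n' "$1" | awk '$2 == "Down" {n++} END {print n+0}'
}

nodes="$(docker node ls --format '{{.Hostname}} {{.Status}}' 2>/dev/null || true)"
echo "nodes waiting to rejoin: $(count_down_nodes "$nodes")"
```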
Docker Python SDK force_new_cluster: If you're using the Docker Python SDK (docker-py), you can use the force_new_cluster parameter:
```python
import docker

client = docker.from_env()
client.swarm.init(force_new_cluster=True)
```

Swarm data location: Docker stores swarm state in `/var/lib/docker/swarm/` on Linux. In extreme cases where `docker swarm leave --force` doesn't work, you can stop Docker, remove this directory, and restart Docker:

```bash
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/swarm
sudo systemctl start docker
```

Warning: This is a last resort and will cause data loss.
Docker Desktop considerations: On Docker Desktop (Mac/Windows), swarm state lives inside the Docker Desktop VM and survives a restart of Docker Desktop; resetting to factory defaults will clear it.
Kubernetes vs Swarm: If you're using Docker Desktop with Kubernetes enabled, be aware that Kubernetes and Swarm are separate orchestrators. Enabling Kubernetes doesn't affect swarm membership.
CI/CD environments: In CI/CD pipelines, always ensure you clean up swarm state between runs if you're testing swarm functionality:
```bash
docker swarm leave --force 2>/dev/null || true
```
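To make that cleanup self-verifying, you can wrap it in a helper (ours) that leaves the swarm and then confirms the node actually reports `inactive`:

```bash
# reset_swarm_state: leave any swarm this node belongs to (ignoring the
# error when it isn't in one), then fail if the state is still active.
reset_swarm_state() {
  docker swarm leave --force 2>/dev/null || true
  state="$(docker info --format '{{.Swarm.LocalNodeState}}' 2>/dev/null || echo inactive)"
  [ "$state" = "inactive" ]
}

reset_swarm_state && echo "swarm state clean"
```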