The SSH 'Host key has changed' error occurs when the remote server's SSH key fingerprint no longer matches what's stored locally. This is a security feature designed to prevent man-in-the-middle attacks. Common causes include server reinstallation, IP reassignment, DNS changes, or legitimate key rotation.
The "Host key for hostname has changed and you have requested strict checking" error comes from a core SSH security feature that prevents man-in-the-middle (MITM) attacks. Here's what it means: SSH stores the fingerprint of every server's public key in your local `~/.ssh/known_hosts` file. When you connect to a server, SSH verifies that the server's current key matches the stored fingerprint. If they don't match, SSH blocks the connection and displays this warning.

This error indicates one of several conditions:

- **Server reinstalled**: a rebuilt OS (or reinstalled OpenSSH server) generated a new host key pair
- **IP address reassigned**: a different server is now using that IP or hostname
- **DNS misconfiguration**: the hostname now points to a different server
- **Legitimate key rotation**: the server administrator intentionally rotated keys
- **Potential security threat**: an attacker might be intercepting your connection

SSH's StrictHostKeyChecking defaults to `ask`, which prompts before trusting an unknown host but rejects connections with changed keys outright rather than silently accepting them. This is the secure default behavior.
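To see what your client currently has on record for a host, you can query the known_hosts file directly with `ssh-keygen -F`. A minimal sketch, run against a scratch file so your real `~/.ssh/known_hosts` is untouched (the host and key here are fabricated for the demo):

```shell
# Build a scratch known_hosts containing one fabricated entry.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "$tmp/hostkey"   # stand-in for a server key
printf 'example.com %s\n' "$(cut -d' ' -f1-2 "$tmp/hostkey.pub")" > "$tmp/known_hosts"

# Look up the stored entry; point -f at ~/.ssh/known_hosts for real use.
# Prints the matching line number and stored key, or nothing if unknown.
ssh-keygen -F example.com -f "$tmp/known_hosts"
```

Against your real file, a hit looks like `# Host example.com found: line 42` followed by the stored key line.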
Before removing the old key, confirm that the server change is expected and authorized:
# The error message shows you the offending key line:
# Example:
# Offending ED25519 key in /home/user/.ssh/known_hosts:42
# Remove with:
# ssh-keygen -f "/home/user/.ssh/known_hosts" -R "example.com"
# DO NOT blindly remove keys without verifying:
# 1. Contact your server administrator
# 2. Confirm that the server was intentionally rebuilt/migrated
# 3. Verify the new key fingerprint matches what the admin expects

SECURITY REMINDER: Removing SSH keys without verification could allow man-in-the-middle attacks. Only proceed if you have confirmed the change is legitimate.
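Part of that verification can be scripted. A sketch of a hypothetical helper (`show_offending` is illustrative, not a standard tool) that prints the exact entry the error points to, plus its fingerprint, before you delete anything:

```shell
# show_offending LINE [FILE]: print the known_hosts entry named in the
# error (e.g. "known_hosts:42" means LINE=42) and its SHA256 fingerprint.
show_offending() {
  local line="$1" kh="${2:-$HOME/.ssh/known_hosts}"
  sed -n "${line}p" "$kh"                      # the raw stored entry
  sed -n "${line}p" "$kh" | ssh-keygen -lf -   # its fingerprint
}
# For the example error above: show_offending 42
```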
Once you've confirmed the change is legitimate, remove the offending key using ssh-keygen:
# Remove by hostname:
ssh-keygen -f ~/.ssh/known_hosts -R "example.com"
# Or remove by IP address:
ssh-keygen -f ~/.ssh/known_hosts -R "192.168.1.100"
# A server is often stored under both its hostname and its IP,
# so run -R once for each name (as above) to remove both entries.
# On success, ssh-keygen prints something like:
# # Host example.com found: line 42
# /home/user/.ssh/known_hosts updated.
# Original contents retained as /home/user/.ssh/known_hosts.old

This command removes the old key entry from your known_hosts file, allowing SSH to accept the new key on the next connection.
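After running `-R`, you can confirm the entry is actually gone. A sketch with a hypothetical helper (`check_removed` is illustrative, not a standard command):

```shell
# check_removed HOST [FILE]: succeed only if no known_hosts entry
# remains for HOST; ssh-keygen -F exits nonzero when nothing matches.
check_removed() {
  local host="$1" kh="${2:-$HOME/.ssh/known_hosts}"
  if ssh-keygen -F "$host" -f "$kh" >/dev/null 2>&1; then
    echo "entry for $host still present" >&2
    return 1
  fi
  echo "no entry for $host - safe to reconnect"
}
# Usage: check_removed example.com
```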
After removing the old key, reconnect to the server. SSH will prompt you to accept the new key:
SSH will display something like:
The authenticity of host 'example.com (192.168.1.100)' can't be established.
ED25519 key fingerprint is SHA256:abcdef1234567890...
Are you sure you want to continue connecting (yes/no/[fingerprint])?

Type yes to accept the new key and add it to known_hosts:
# This adds the new key fingerprint to ~/.ssh/known_hosts
# Future connections will not prompt you againFor high-security environments, verify the new key fingerprint with your server administrator before accepting:
# Get the server's SSH key fingerprint without connecting:
ssh-keyscan example.com 2>/dev/null | ssh-keygen -l -f - -E sha256
# Example output (your fingerprints will differ):
# 2048 SHA256:abcdef1234567890... example.com (RSA)
# 256 SHA256:1234567890abcdef... example.com (ED25519)
# Compare this with what your administrator provided
# Only accept it if the fingerprints match exactly

This approach avoids blindly accepting a key. Always verify critical infrastructure keys this way.
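That comparison can also be scripted so a mismatch fails loudly instead of relying on eyeballing. A sketch of a hypothetical helper (`verify_fingerprint` is not a standard tool, just an illustration):

```shell
# verify_fingerprint KEYLINE EXPECTED: compare the SHA256 fingerprint of
# one ssh-keyscan/known_hosts line against the value your administrator
# provided; prints OK on a match, returns nonzero on a mismatch.
verify_fingerprint() {
  local keyline="$1" expected="$2" actual
  actual=$(printf '%s\n' "$keyline" | ssh-keygen -lf - | awk '{print $2}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $actual"
  else
    echo "MISMATCH: got $actual, expected $expected" >&2
    return 1
  fi
}
# Hedged usage example (hostname and fingerprint are placeholders):
#   verify_fingerprint "$(ssh-keyscan -t ed25519 example.com 2>/dev/null)" \
#     "SHA256:abcdef1234567890..."
```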
If multiple servers changed keys (e.g., after infrastructure migration), remove them all:
# Remove multiple hosts at once:
ssh-keygen -f ~/.ssh/known_hosts -R "server1.example.com"
ssh-keygen -f ~/.ssh/known_hosts -R "server2.example.com"
ssh-keygen -f ~/.ssh/known_hosts -R "server3.example.com"
# Or, if you're rebuilding everything, clear the entire known_hosts file
# (careful - removes ALL known servers!):
rm ~/.ssh/known_hosts
touch ~/.ssh/known_hosts
chmod 644 ~/.ssh/known_hosts
# Then reconnect to all servers to rebuild the file
ssh user@server1.example.com "exit"
ssh user@server2.example.com "exit"
# etc...

For production systems, track which servers need updating and update them methodically.
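The repeated removals above can be folded into a loop. A minimal sketch with a hypothetical wrapper (`remove_stale_hosts` is illustrative; hostnames are placeholders):

```shell
# remove_stale_hosts HOST...: remove stale known_hosts entries for
# several migrated hosts in one pass. Set KH to operate on a copy first.
remove_stale_hosts() {
  local kh="${KH:-$HOME/.ssh/known_hosts}" h
  for h in "$@"; do
    ssh-keygen -f "$kh" -R "$h"
  done
}
# Usage:
#   remove_stale_hosts server1.example.com server2.example.com server3.example.com
```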
In automated environments (CI/CD, scripts), you may need to disable strict checking temporarily:
# Option 1: Accept any host key for a single command:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@example.com 'command'
# Option 2: Accept new keys but reject changed keys:
ssh -o StrictHostKeyChecking=accept-new user@example.com 'command'
# Option 3: Set in ~/.ssh/config for specific hosts:
Host temp-server
    HostName example.com
    User deploy
    StrictHostKeyChecking accept-new
    UserKnownHostsFile ~/.ssh/known_hosts_temp

SECURITY WARNING: StrictHostKeyChecking=no is vulnerable to MITM attacks. Only use it:
- In isolated/trusted networks
- For temporary/ephemeral servers
- In combination with other security measures
- With explicit team approval
Prefer accept-new instead, which accepts new keys but still rejects changed keys.
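To confirm which of these values actually applies to a given host, you can ask the client to print its fully resolved configuration without connecting. A sketch (`example.com` is a placeholder; `ssh -G` makes no network connection):

```shell
# Print the effective client options for a host, after all config files
# and command-line flags are merged, then pick out the one governing
# host key behavior.
ssh -G example.com | grep -i '^stricthostkeychecking'
```

With no overrides this typically prints `stricthostkeychecking ask`; with a per-host override like the one above, it should show `accept-new` (or `no`).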
If you manage many servers and some have changed, organize your connections in ~/.ssh/config:
# ~/.ssh/config
Host prod-db-new
    HostName 192.168.1.50
    User deploy
    IdentityFile ~/.ssh/deploy_key
    # New key accepted on first connection

Host staging-app-migrated
    HostName staging.example.com
    User deploy
    StrictHostKeyChecking accept-new
    UserKnownHostsFile ~/.ssh/known_hosts

Host legacy-old-key
    HostName legacy.example.com
    User admin
    # Will fail if key changed - requires manual update

Then connect without remembering all options:
ssh prod-db-new
ssh staging-app-migrated

Automation frameworks like Ansible, Terraform, and SaltStack may also encounter this error:
# Ansible - disable host key checking in ansible.cfg:
[defaults]
host_key_checking = False
# Or set environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
# Terraform - add to provisioner:
provisioner "remote-exec" {
  inline = ["echo 'connected'"]
  connection {
    type        = "ssh"
    user        = "deploy"
    private_key = file("~/.ssh/deploy_key")
    host        = self.public_ip
    # Terraform's SSH connection does not verify host keys by default
  }
}

# To pre-populate your local known_hosts instead, run ssh-keyscan on
# your own machine via a local-exec provisioner (remote-exec would
# append to the remote host's file, not yours):
provisioner "local-exec" {
  command = "ssh-keyscan -H ${self.public_ip} >> ~/.ssh/known_hosts"
}

For CI/CD pipelines, pre-fetch all server keys at the start of the job:
# GitHub Actions / GitLab CI
ssh-keyscan -H prod.example.com >> ~/.ssh/known_hosts
ssh-keyscan -H staging.example.com >> ~/.ssh/known_hosts

Understanding SSH host key verification:
SSH stores server host keys to verify that you're connecting to the same server each time. This prevents attackers from intercepting your connection (MITM attack).
When StrictHostKeyChecking is at its default setting, `ask`:
- New servers: SSH asks you to accept their key (one-time prompt)
- Changed keys: SSH rejects the connection immediately without prompting
- Same keys: SSH silently accepts the connection
Setting it to `no` accepts everything (vulnerable to MITM), while `yes` also refuses unknown hosts entirely; `ask` and `accept-new` sit between those extremes, keeping prompts manageable without sacrificing security.
Why servers change keys:
- Server rebuild: New OS install generates new keys
- Container/VM cloning: clones start with the source image's host keys; most cloud images regenerate them on first boot, which changes the fingerprint
- Planned rotation: Administrators rotate keys for security policy
- Hardware replacement: New hardware gets new keys
- Cloud infrastructure: Auto-scaling or rolling updates create new instances
The fingerprint verification:
Each SSH key has a unique fingerprint. When you see:
The authenticity of host 'example.com' can't be established.
ED25519 key fingerprint is SHA256:abcdef1234567890...

You can verify this matches what your administrator expects. Most modern environments provide fingerprints in:
- Server provisioning templates
- Cloud provider dashboards
- Administrative documentation
- Configuration management systems
For CI/CD and automation:
Use ssh-keyscan to automatically accept new keys in CI/CD without disabling security:
# Fetch the server's key and add to known_hosts
ssh-keyscan -t rsa,ed25519 example.com >> ~/.ssh/known_hosts
# Then subsequent SSH commands will work without prompting
ssh user@example.com 'deploy.sh'

This approach is safer than disabling StrictHostKeyChecking completely.
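Note that `ssh-keyscan -H` (used earlier) stores hostnames in hashed form. If an existing known_hosts still contains plain hostnames, you can hash them after the fact so the file doesn't reveal your server list if it leaks. A sketch with a hypothetical wrapper (`hash_known_hosts` is illustrative):

```shell
# hash_known_hosts [FILE]: replace plain hostnames in known_hosts with
# hashed ones. ssh-keygen -H keeps an unhashed FILE.old backup, which
# we delete so the plain host list doesn't linger.
hash_known_hosts() {
  local kh="${1:-$HOME/.ssh/known_hosts}"
  ssh-keygen -H -f "$kh" && rm -f "$kh.old"
}
# Lookups (ssh-keygen -F, normal ssh connections) keep working on
# hashed entries; only casual reading of the file is prevented.
```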
Detecting man-in-the-middle attacks:
The "Host key has changed" error can indicate an MITM attack. Be suspicious if:
- You didn't expect the server to rebuild
- The change happened out of maintenance window
- Multiple servers changed keys simultaneously without coordination
- Your network administrator wasn't informed
- The new key fingerprint doesn't match expected values
In these cases, investigate before blindly removing the old key.