This error occurs when Kubernetes certificates have expired, preventing TLS connections between kubectl and the API server or between cluster components. Certificate renewal is required to restore cluster access.
Kubernetes uses X.509 certificates for authentication and encryption between the kubectl client and the API server, between control plane components (API server, controller manager, scheduler), and between the kubelet and the API server. When these certificates exceed their validity period (typically one year for kubeadm-created clusters), cryptographic validation fails and all cluster communication is blocked. The error indicates that the TLS handshake cannot complete because the server's certificate (or the client certificate) has passed its 'Not After' validity date. It typically affects kubeadm-managed clusters where automatic renewal hasn't occurred, or where cluster upgrades haven't been performed within the one-year window.
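To confirm the diagnosis from a client machine, inspect the certificate the API server presents during the TLS handshake. A minimal check, assuming the API server listens on the default port 6443 (substitute your control plane address for <control-plane-ip>):
echo | openssl s_client -connect <control-plane-ip>:6443 2>/dev/null | openssl x509 -noout -dates
If the notAfter date printed is in the past, the serving certificate has expired.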
System clock skew can make valid certificates appear expired. Verify time synchronization:
date
timedatectl status
chronyc tracking # on chrony-based systems
If clocks are out of sync by more than 5 minutes, restart the time sync service:
sudo systemctl restart chronyd # or ntpd
sudo timedatectl set-ntp true
On a control plane node, use kubeadm to inspect all certificates:
sudo kubeadm certs check-expiration
To manually inspect a specific kubeconfig certificate:
grep client-certificate-data ~/.kube/config | cut -d: -f2 | tr -d ' ' | base64 -d | openssl x509 -noout -text | grep 'Not After'
This decodes the embedded certificate and shows the exact expiration timestamp.
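You can also read expiry dates straight from the on-disk certificates; the paths below assume the default kubeadm PKI layout:
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
sudo openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -enddate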
Before renewing certificates, back up the PKI directory on all control plane nodes:
sudo cp -r /etc/kubernetes/pki /etc/kubernetes/pki.backup.$(date +%Y%m%d_%H%M%S)
Also back up your kubeconfig:
cp ~/.kube/config ~/.kube/config.backup.$(date +%Y%m%d_%H%M%S)
On each control plane node, use kubeadm to renew all certificates:
sudo kubeadm certs renew all
For multi-control-plane clusters, execute this on every control plane node, not just one.
You can also renew specific certificates:
sudo kubeadm certs renew admin.conf # renew admin kubeconfig
sudo kubeadm certs renew apiserver # renew API server cert
sudo kubeadm certs renew all --dry-run # preview changes
After renewal, control plane components must be restarted to load the new certificates:
# For kubeadm clusters, restart static pods
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sudo mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/
sleep 10 # wait for kubelet to stop pods
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
sleep 30 # wait for kubelet to restart pods
Or simply reboot the control plane node.
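To confirm the static pods came back up, query the container runtime on the node directly (crictl is available on kubeadm-provisioned nodes):
sudo crictl ps --name kube-apiserver
sudo crictl ps --name etcd
Both should show a running container with a recent creation time.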
After renewing control plane certificates, update your local kubectl config:
# Back up your current kubeconfig
cp ~/.kube/config ~/.kube/config.old
# Copy the renewed admin.conf from control plane node
scp user@control-plane-node:/etc/kubernetes/admin.conf ~/.kube/config
# Fix file permissions
chmod 600 ~/.kube/config
# Verify kubectl can connect
kubectl cluster-info
kubectl get nodes
Distribute the renewed admin.conf to every user who accesses the cluster.
kubeadm automatically renews control plane certificates during cluster upgrades (kubeadm upgrade apply). If you perform cluster upgrades regularly (at least every 12 months), manual renewal is unnecessary. However, kubeadm does NOT automatically renew certificates if you don't upgrade.
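For reference, a regular upgrade that also renews certificates looks like this (substitute your target version for <version>):
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply <version>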
Kubelet certificates are handled separately by kubelet's built-in certificate rotation mechanism (rotateCertificates: true in KubeletConfiguration). Kubelet automatically requests a new certificate when the current one is nearing expiration. However, this only works if the control plane is reachable.
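To verify that rotation is enabled and see when the current kubelet client certificate expires (paths assume the default kubeadm layout):
grep rotateCertificates /var/lib/kubelet/config.yaml
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate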
In multi-control-plane clusters, you must run kubeadm certs renew on every control plane node. Partial renewal can cause certificate validation mismatches between nodes, breaking etcd quorum.
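For example, a sketch using hypothetical control plane hostnames cp1, cp2, and cp3 (adjust hostnames and SSH/sudo setup to your environment):
for node in cp1 cp2 cp3; do
  ssh "$node" 'sudo kubeadm certs renew all'
done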
For Kind (Kubernetes in Docker), certificates are inside the container. You must exec into the Kind container to run renewal.
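A sketch, assuming the default cluster name kind, which names the node container kind-control-plane (run docker ps to find yours):
docker exec kind-control-plane kubeadm certs renew all
docker restart kind-control-plane # restart the node so components reload the certs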
MicroK8s and K3s use different certificate management. For MicroK8s, use sudo microk8s.refresh-certs. For K3s, use k3s certificate rotate; the separate rotate-ca subcommand is only for rotating the CA itself.
If the root CA certificate itself expires, all leaf certificates become invalid. This is rare (CA certs default to 10 years) but requires regenerating all certificates.
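To check how much time the root CA has left (default kubeadm path shown):
sudo openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -enddate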
Set up proactive monitoring to alert before certificates expire. Run kubeadm certs check-expiration monthly and alert if any certificate expires within 30 days.
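A minimal sketch of such a check, using openssl's -checkend flag to flag any certificate in the default kubeadm PKI directory that expires within 30 days (wire the exit code into whatever alerting you already use):
#!/usr/bin/env bash
# Exits non-zero if any certificate expires within the next 30 days.
shopt -s nullglob
threshold=$((30 * 24 * 3600))
status=0
for cert in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  if ! openssl x509 -in "$cert" -noout -checkend "$threshold" >/dev/null; then
    echo "WARNING: $cert expires within 30 days"
    status=1
  fi
done
exit $status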