The "Unauthorized" error in Kubernetes indicates that authentication to the API server has failed, usually because your credentials are missing, invalid, or expired. This is different from "Forbidden" which means you're authenticated but lack permissions.
An Unauthorized error (HTTP 401) from Kubernetes means the API server rejected your credentials. This happens at the authentication layer, before permission checks. Common causes include expired tokens, invalid certificate data in kubeconfig, missing IAM credentials on cloud platforms, or stale service account tokens. The error prevents you from communicating with the cluster at all—no kubectl commands work. Unlike Forbidden (403), which means you're logged in but don't have permission for a specific action, Unauthorized means the server doesn't even know who you are.
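In practice, the failure typically surfaces like this (the exact wording varies by client version):

kubectl get pods
error: You must be logged in to the server (Unauthorized)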
Step 1: Refresh your cloud provider credentials. Run the appropriate command for your platform:

AWS EKS:
aws eks update-kubeconfig --name <cluster-name> --region <region>

Azure AKS:
az aks get-credentials --name <cluster> --resource-group <group>

GCP GKE:
gcloud container clusters get-credentials <cluster> --zone <zone>

This writes fresh credentials for the cluster into ~/.kube/config. Test with kubectl get nodes.
Step 2: Inspect your kubeconfig. Run kubectl config current-context to see which cluster you're connected to, and kubectl config view to inspect your entire kubeconfig. Look for:
- A valid server URL (not localhost unless you're using a local cluster)
- Non-empty client-certificate-data, or a client-certificate path that points to an existing file
- Non-empty client-key-data, or a valid client-key file
- Correct user and context references
If any values are empty, incomplete, or point to missing files, that's your issue.
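To see only the stanza for the active context, kubectl's --minify flag helps; add --raw to print the embedded certificate data instead of REDACTED placeholders:

kubectl config view --minify --raw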
Step 3: Check certificate expiration:

openssl x509 -in ~/.kube/<cert-file> -text -noout | grep -A2 "Validity"

If certificates are expired, regenerate your kubeconfig:
- Local clusters (Minikube/Docker Desktop): Delete cluster and recreate
- Cloud clusters: Use the refresh command from Step 1
- Self-managed: Regenerate certificates on cluster control plane (check cluster docs)
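If the certificate is embedded in the kubeconfig as base64 (client-certificate-data) rather than referenced as a file, you can decode and check it in one pipeline. A sketch, assuming a single user entry in the file (adjust the jsonpath index otherwise; on macOS use base64 -D):

kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate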
Step 4: Fix the aws-auth ConfigMap (EKS). For EKS clusters, the IAM identity that creates the cluster gets admin access automatically. Other users need entries in aws-auth:

kubectl edit -n kube-system configmap/aws-auth

Add your IAM user/role under mapUsers or mapRoles:
mapUsers: |
  - userarn: arn:aws:iam::ACCOUNT:user/USERNAME
    username: USERNAME
    groups:
      - system:masters
mapRoles: |
  - rolearn: arn:aws:iam::ACCOUNT:role/ROLENAME
    username: ROLENAME
    groups:
      - system:masters

Save and test: kubectl get nodes. Verify IAM credentials: aws sts get-caller-identity. (Note that system:masters grants full cluster admin; prefer a narrower group in production.)
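If you have eksctl installed, it can add the mapping for you instead of hand-editing the ConfigMap. A sketch, using the same placeholder ARN as above:

eksctl create iamidentitymapping --cluster <cluster-name> --region <region> --arn arn:aws:iam::ACCOUNT:user/USERNAME --username USERNAME --group system:masters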
Step 5: Check which AWS credentials kubectl is using:

aws sts get-caller-identity

This shows the account, user, and ARN. Verify that this matches your EKS cluster's account and that the user/role has the proper permissions. If the credentials are wrong, set them:
export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_REGION=<your-region>

Or run aws configure to store them persistently in ~/.aws/credentials.
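On EKS, kubectl gets its bearer token by invoking an exec plugin (aws eks get-token or aws-iam-authenticator) configured in kubeconfig. Running the same command by hand tells you whether the AWS credential chain itself is broken:

aws eks get-token --cluster-name <cluster-name> --region <region>

If this prints a token document, your AWS credentials are fine and the problem is cluster-side (for example aws-auth); if it errors, fix the AWS credentials first.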
Step 6: Verify service account tokens. If a pod can't authenticate to the API server, verify its service account:
kubectl get sa -n <namespace>
kubectl describe sa <sa-name> -n <namespace>

Check the secret: kubectl get secret <secret-name> -n <namespace> -o yaml. The token in the secret's data field is base64-encoded; pipe it through base64 -d to inspect it. If token automounting is enabled, setting serviceAccountName is enough; the explicit mount below is only needed for manually created token secrets:
spec:
  serviceAccountName: <sa-name>
  containers:
    - name: app
      volumeMounts:
        - name: token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount  # where in-cluster clients look for the token
  volumes:
    - name: token
      secret:
        secretName: <sa-name>-token
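Note that Kubernetes 1.24 and later no longer auto-create long-lived token secrets for service accounts. For a quick authentication test, mint a short-lived token via the TokenRequest API instead:

kubectl create token <sa-name> -n <namespace>

You can pass the output to kubectl with --token=... to confirm that the service account itself authenticates.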
For long-lived sessions, configure credential refresh in kubeconfig via an exec plugin (shown below for EKS with aws-iam-authenticator) or OIDC. This prevents tokens from expiring mid-session:

apiVersion: v1
kind: Config
users:
  - name: my-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator
        args:
          - token
          - -i
          - <cluster-name>

For other identity providers (Azure, GCP, etc.), check their documentation for OIDC configuration. This requires cluster-side setup as well; ask your cluster administrator.
Platform-specific notes:
- AWS EKS: the IAM identity that created the cluster automatically gains admin access; other users must be explicitly added to the aws-auth ConfigMap (Step 4 above).
- Azure AKS: uses Azure AD integration; verify that the Azure AD user exists and has the correct role.
- GCP GKE: uses Google Cloud IAM directly.
- CI/CD systems (GitLab, GitHub Actions, Jenkins): service account tokens need explicit secret configuration and proper RBAC bindings.
- WSL2: credentials may be cached separately on the Windows and Linux sides; clear the caches if credentials were rotated.
- Rotating IAM credentials: both old and new caches may need clearing with kubectl config unset users.<user-name> and aws sso logout.
- OpenID Connect providers: token refresh requires both client-side (kubeconfig) and server-side (cluster) configuration.
- Certificate-based authentication: doesn't auto-refresh; regenerate kubeconfig after certificate renewal.