AKS Defender errors occur when Microsoft Defender for Containers fails to deploy, authenticate, or send security data to Azure. Common causes include disabled feature flags, network connectivity issues, cgroup v2 incompatibility, and invalid cluster configurations.
Microsoft Defender for Containers is a cloud-native Kubernetes security service that provides workload protection, vulnerability scanning, and runtime threat detection for Azure Kubernetes Service (AKS) clusters. When an "AKS Defender error" occurs, it indicates that the Defender agent or sensor components cannot initialize, authenticate with Azure, or communicate with the Defender for Cloud backend. This blocks security monitoring and prevents Defender from scanning for vulnerabilities or detecting runtime threats. Defender errors can manifest in several ways: feature enablement failures at cluster creation, DaemonSet pod failures after the cluster is running, connectivity timeouts in private clusters, or compatibility issues with specific Kubernetes versions. Each requires a different troubleshooting approach.
The most common cause is the Defender preview feature not being enabled:
# Register the feature (requires Owner role on subscription)
az feature register --namespace Microsoft.ContainerService --name AKS-AzureDefender
# Wait for it to show as "Registered"
az feature list --output table --query "[?contains(name, 'AKS-AzureDefender')]"
# Once Registered, refresh provider
az provider register --namespace Microsoft.ContainerService
Wait 5-10 minutes for propagation. If you still see errors, your subscription may not have preview access; contact Azure support.
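Registration can take a while; instead of re-running the list command by hand, a small polling loop (a sketch built on az feature show, which reports the same state field) can wait for it:
# Poll every 30s until the feature reports Registered
while [ "$(az feature show --namespace Microsoft.ContainerService --name AKS-AzureDefender --query properties.state -o tsv)" != "Registered" ]; do
  echo "Waiting for feature registration..."
  sleep 30
done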
Once the feature is registered, enable Defender on your cluster:
# Update cluster with Defender enabled
az aks update \
--resource-group <resource-group-name> \
--name <cluster-name> \
--enable-defender
# Verify it was applied
az aks show \
--resource-group <resource-group-name> \
--name <cluster-name> \
--query "securityProfile.defender"The cluster control plane will update (takes 5-10 minutes). Monitor with az aks show to see when it completes.
Check that Defender components deployed successfully:
# Get kubeconfig
az aks get-credentials \
--resource-group <resource-group-name> \
--name <cluster-name>
# Check Defender pods
kubectl get pods -n kube-system | grep microsoft-defender
kubectl get pods -n kube-system | grep defender
# All pods should be in Running state
kubectl get pods -n kube-system -l app=microsoft-defender-publisher
kubectl get pods -n kube-system -l k8s-app=microsoft-defender
If pods are in Pending or CrashLoopBackOff, check logs:
kubectl logs -n kube-system -l app=microsoft-defender-publisher --tail=100
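# If the logs are empty or the pod never started, recent kube-system events
# usually reveal the scheduling or image-pull failure (this assumes Defender
# pods carry "defender" in their names, as the grep checks above do)
kubectl get events -n kube-system --sort-by=.lastTimestamp | grep -i defender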
kubectl describe pod -n kube-system <pod-name> # Check events
Newer AKS clusters use cgroup v2 (systemd). Older Defender versions expect cgroup v1 file paths:
# Check cgroup version on cluster nodes
kubectl debug node/<node-name> -it --image=ubuntu
# Inside container:
ls /sys/fs/cgroup/memory/memory.usage_in_bytes
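# Alternative check (assumes stat is available in the debug image): the
# filesystem type of /sys/fs/cgroup is cgroup2fs on cgroup v2, tmpfs on v1
stat -fc %T /sys/fs/cgroup/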
# If not found, you're on cgroup v2
Solution: Update AKS to the latest patch version:
az aks upgrade \
--resource-group <resource-group-name> \
--name <cluster-name> \
--kubernetes-version <latest-version>
Alternatively, disable Defender until your cluster runs a Defender version that supports cgroup v2.
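To see which versions are available before choosing <latest-version>:
# List the upgrade targets offered for this cluster
az aks get-upgrades \
--resource-group <resource-group-name> \
--name <cluster-name> \
--output table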
For private AKS clusters, ensure outbound access to Defender endpoints:
# From a pod in the cluster, test connectivity:
kubectl run -it debug --image=curlimages/curl --restart=Never -- /bin/sh
# Test required endpoints
curl -I https://dc.services.visualstudio.com # Application Insights
curl -I https://scadvisorcontent.blob.core.windows.net # Container images
curl -I https://prod.oms.opinsights.azure.com # Log Analytics
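# The same checks as a loop with a pass/fail summary (same three endpoints;
# extend the list if your Defender version requires additional FQDNs)
for host in dc.services.visualstudio.com scadvisorcontent.blob.core.windows.net prod.oms.opinsights.azure.com; do
  curl -sI --max-time 10 "https://$host" > /dev/null && echo "$host: OK" || echo "$host: FAILED"
done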
kubectl delete pod debug
If connections fail, add a NAT Gateway to your cluster subnet:
# Create NAT Gateway (if not already created)
az network public-ip create \
--resource-group <resource-group-name> \
--name <pip-name>
az network nat gateway create \
--resource-group <resource-group-name> \
--name <nat-gateway-name> \
--public-ip-address-ids <pip-id>
# Associate with subnet
az network vnet subnet update \
--resource-group <resource-group-name> \
--vnet-name <vnet-name> \
--name <subnet-name> \
--nat-gateway <nat-gateway-name>
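To confirm the association took effect (a quick check; the query path follows the subnet resource's JSON shape):
# Verify the subnet now references the NAT Gateway
az network vnet subnet show \
--resource-group <resource-group-name> \
--vnet-name <vnet-name> \
--name <subnet-name> \
--query "natGateway.id"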
If Defender pods report "Failed to register certificate with TLS12":
# Check Defender pod logs
kubectl logs -n kube-system -l app=microsoft-defender-publisher | grep -i certificate
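This error often points at a firewall or proxy performing TLS inspection between the cluster and Azure. One way to check (a sketch; curl prints the certificate issuer in its verbose output, and the issuer should be a public Microsoft-trusted CA, not your security appliance):
# Run curl verbosely from inside the cluster and capture the issuer it sees
kubectl run tls-check --image=curlimages/curl --restart=Never --command -- \
sh -c 'curl -sv https://dc.services.visualstudio.com 2>&1 | grep -i issuer'
# Once the pod completes, read the result, then clean up
kubectl logs tls-check
kubectl delete pod tls-check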
For private clusters with DNS issues:
# Verify DNS resolution from cluster
kubectl run -it debug --image=ubuntu --restart=Never -- /bin/bash
apt-get update && apt-get install -y dnsutils
nslookup dc.services.visualstudio.com
exit
kubectl delete pod debug
If DNS fails, configure custom DNS servers on the cluster's virtual network (node VMs inherit the VNet's DNS settings):
az network vnet update \
--resource-group <resource-group-name> \
--name <vnet-name> \
--dns-servers <custom-dns-ip>
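Existing nodes only pick up VNet DNS changes after a restart or reimage; one way to force that (a sketch; substitute your node pool's name, which az aks nodepool list will show) is a node-image-only upgrade:
az aks nodepool upgrade \
--resource-group <resource-group-name> \
--cluster-name <cluster-name> \
--name <nodepool-name> \
--node-image-only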
Defender needs a working Log Analytics workspace to function:
# Get cluster's workspace ID
az aks show \
--resource-group <resource-group-name> \
--name <cluster-name> \
--query "addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceId"
# Verify workspace exists and has permissions
az monitor log-analytics workspace show \
--ids <workspace-resource-id>
# Check if workspace was deleted
az monitor log-analytics workspace list \
--resource-group <resource-group-name>
If the workspace was deleted, enable monitoring with a new workspace:
az aks enable-addons \
--resource-group <resource-group-name> \
--name <cluster-name> \
--addons monitoring
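If you would rather attach an existing workspace than let Azure create a default one, az aks enable-addons accepts a workspace resource ID:
az aks enable-addons \
--resource-group <resource-group-name> \
--name <cluster-name> \
--addons monitoring \
--workspace-resource-id <workspace-resource-id>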
If all else fails, reset Defender by disabling and re-enabling it:
# Disable Defender
az aks update \
--resource-group <resource-group-name> \
--name <cluster-name> \
--disable-defender
# Wait 2-3 minutes for cleanup
sleep 180
# Re-enable Defender
az aks update \
--resource-group <resource-group-name> \
--name <cluster-name> \
--enable-defender
# Verify
kubectl get pods -n kube-system | grep defender
This forces a fresh deployment of all Defender components, often resolving lingering pod state issues.
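To watch the redeployment as it happens (using the publisher label from the verification step above; press Ctrl-C once all pods report Running):
kubectl get pods -n kube-system -l app=microsoft-defender-publisher --watch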
Defender for Containers is a paid feature; verify that the Defender for Containers plan is enabled in Microsoft Defender for Cloud for your subscription. In multi-cluster scenarios, you can enable Defender per cluster; there is no cluster-wide toggle. For GitOps deployments (ArgoCD, Flux), Defender is a cluster property rather than an in-cluster resource, so it cannot be managed through in-cluster manifests; use the Azure CLI, portal, or infrastructure-as-code templates instead. In CI/CD pipelines, the service principal must have the Contributor role on the resource group to enable Defender. Windows nodes in AKS also require Defender enablement (the same commands apply). Defender relies on system-critical pods (microsoft-defender-publisher, microsoft-defender-collector) running in kube-system; do not delete or modify these pods manually. For compliance and audit purposes, monitor Defender alerts in the Microsoft Defender for Cloud dashboard under "Container registries" and "Kubernetes clusters". If you need to exclude specific workloads from Defender scanning, use Kubernetes namespace labels and Azure Policy exemptions rather than disabling Defender entirely. Consider using Azure Policy to enforce Defender enablement across all AKS clusters in your organization.
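As a starting point for that last suggestion (the az policy commands are standard, but the displayName filter is only a guess at how the built-in definition is titled; verify the definition before assigning it):
# Find the built-in policy definition that enables the Defender profile on AKS
az policy definition list \
--query "[?contains(displayName, 'Defender') && contains(displayName, 'Kubernetes')].{name:name, displayName:displayName}" \
--output table
# Assign it at subscription scope using the definition name found above
az policy assignment create \
--name enforce-aks-defender \
--policy <definition-name> \
--scope "/subscriptions/<subscription-id>"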