The "AppArmor profile error" occurs when a Kubernetes Pod references an AppArmor profile that is not loaded on the node. This error is caused by missing profile definitions, profile loading failures, or API version mismatches between how the profile is specified and what the cluster supports.
When Kubernetes attempts to enforce an AppArmor security profile on a container, it requires that profile to be pre-loaded on the node's kernel. If the profile name doesn't exist or hasn't been loaded, the kubelet rejects the Pod and prevents it from starting. AppArmor is a Linux kernel security module that restricts programs' capabilities at the OS level. Kubernetes integrates with AppArmor to confine container processes, but unlike other security mechanisms, AppArmor profiles are not automatically deployed—they must exist on every node where Pods using them will run. The error manifests differently depending on your Kubernetes version: prior to v1.30, AppArmor was specified via Pod annotations and would fail with "Cannot enforce AppArmor"; in v1.30+, it's specified in securityContext fields using the appArmorProfile API.
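To confirm the failure is AppArmor-related, check the Pod's events and status; a minimal sketch, where <pod-name> is a placeholder for the failing Pod:

# Look for the AppArmor rejection in the Pod's events
kubectl describe pod <pod-name> | grep -i apparmor
kubectl get events --sort-by='.lastTimestamp' | grep -i apparmor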
First, identify which AppArmor specification method your cluster uses:
kubectl version

Kubernetes v1.30+ uses securityContext.appArmorProfile:
containers:
- name: app
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: my-profile

Pre-v1.30 used Pod annotations (now deprecated):
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile

If upgrading from v1.29 to v1.30+, update your manifests to use the new securityContext format.
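A quick way to find workloads that still rely on the deprecated annotation is to filter on the annotation prefix; a minimal sketch, assuming jq is installed on your workstation:

# List Pods that still carry the deprecated AppArmor annotation
kubectl get pods -A -o json \
  | jq -r '.items[]
      | select((.metadata.annotations // {}) | keys | any(startswith("container.apparmor.security.beta.kubernetes.io")))
      | .metadata.namespace + "/" + .metadata.name'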
SSH into the node and check if AppArmor is active:
sudo systemctl status apparmor
sudo aa-status   # Lists loaded profiles

If AppArmor is not running:
sudo systemctl start apparmor
sudo systemctl enable apparmor   # Enable on boot

Verify the module is loaded in the kernel:
ls -la /sys/kernel/security/apparmor/

If AppArmor is not available, enable it via kernel boot parameters (requires node restart):
# Edit GRUB config (Ubuntu/Debian)
sudo nano /etc/default/grub
# Find GRUB_CMDLINE_LINUX and add apparmor=1 security=apparmor
# Run: sudo update-grub && sudo reboot

View all loaded AppArmor profiles on the node:
sudo aa-status | grep -i profile
sudo cat /sys/kernel/security/apparmor/profiles

Check if your profile file exists:
sudo ls -la /etc/apparmor.d/
sudo cat /etc/apparmor.d/my-profile   # View the profile definition

If the profile file exists but isn't loaded, manually load it:
sudo apparmor_parser -r /etc/apparmor.d/my-profile
sudo aa-status | grep my-profile

If it fails to load, there's a syntax error in the profile. Check logs:
sudo dmesg | tail -20
sudo journalctl -xe | grep -i apparmor

If the profile doesn't exist, create a basic one. On each node:
sudo tee /etc/apparmor.d/k8s-test-profile > /dev/null << 'EOF'
#include <tunables/global>
profile k8s-test-profile flags=(attach_disconnected) {
  #include <abstractions/base>
  /bin/sh rix,
  /bin/bash rix,
  /bin/ls rix,
  /bin/cat rix,
  /bin/echo rix,
  /dev/null rw,
  /dev/zero rw,
  /dev/full rw,
  /dev/random r,
  /dev/urandom r,
}
EOF

Load it:
sudo apparmor_parser -r /etc/apparmor.d/k8s-test-profile
sudo aa-status | grep k8s-test-profile

Now reference it in your Pod:
containers:
- name: app
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: k8s-test-profile

For cluster-wide consistency, deploy a DaemonSet that loads profiles on every node. Create a directory containing your profile files:
# profiles/k8s-apparmor-profile
#include <tunables/global>
profile k8s-apparmor-profile flags=(attach_disconnected) {
  #include <abstractions/base>
  ...
}

Create a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: apparmor-loader
  template:
    metadata:
      labels:
        name: apparmor-loader
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: loader
        image: ubuntu:22.04
        securityContext:
          privileged: true
        volumeMounts:
        - name: profiles
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: sys
          mountPath: /sys
        command:
        - /bin/bash
        - -c
        - |
          apt-get update && apt-get install -y apparmor-utils
          for profile in /etc/apparmor.d/*; do
            apparmor_parser -r "$profile" || echo "Failed to load $profile"
          done
          sleep 1000000
      volumes:
      - name: profiles
        configMap:
          name: apparmor-profiles
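          # This ConfigMap is not created automatically. One way to build it from a
          # local profiles/ directory (directory name is an assumption) is:
          #   kubectl create configmap apparmor-profiles --from-file=profiles/ -n kube-system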
      - name: sys
        hostPath:
          path: /sys

After ensuring the profile exists on the node, deploy a test Pod:
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-test
spec:
  containers:
  - name: app
    image: busybox
    securityContext:
      appArmorProfile:
        type: Localhost
        localhostProfile: k8s-test-profile
    command: ["sleep", "3600"]

Apply and check status:
kubectl apply -f apparmor-test.yaml
kubectl describe pod apparmor-test
kubectl logs apparmor-test   # Check for errors

If successful, the Pod will run with the AppArmor profile enforced. If it fails, check events:
kubectl get events --sort-by='.lastTimestamp' | grep -i apparmor

If the Pod still can't load the profile, check system logs on the node:
# SSH into the node
ssh <node-ip>
# Check kubelet logs
sudo journalctl -u kubelet -n 50 | grep -i apparmor
# Check AppArmor denied messages
sudo dmesg | grep -i apparmor
sudo tail -f /var/log/audit/audit.log | grep apparmor
# Check if profile syntax is invalid
sudo apparmor_parser -d /etc/apparmor.d/my-profile 2>&1

Common issues:
- Profile has syntax errors: Fix the profile definition
- Profile name in Pod doesn't match loaded profile: Ensure exact spelling match (see the check after this list)
- AppArmor not enabled in the kernel: AppArmor is typically built into the kernel rather than shipped as a loadable module, so enable it via the boot parameters described above
- SELinux conflicts (on some distros): Disable SELinux or set to permissive mode
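To rule out the name-mismatch case above, compare what the Pod requests against what the node has loaded; a minimal sketch using the apparmor-test Pod and k8s-test-profile from earlier:

# Profile name the Pod requests (v1.30+ securityContext format)
kubectl get pod apparmor-test -o jsonpath='{.spec.containers[0].securityContext.appArmorProfile.localhostProfile}'

# Profile name the node has loaded (run on the node)
sudo aa-status | grep k8s-test-profile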
If you can't get custom profiles working, use the runtime's default profile:
containers:
- name: app
  securityContext:
    appArmorProfile:
      type: RuntimeDefault

Or in older annotation format:
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: runtime/default

RuntimeDefault applies the container runtime's built-in AppArmor policy, which is relatively permissive. It is less restrictive than a tailored custom profile but still provides baseline protection. Once your cluster infrastructure is set up to distribute custom profiles via the DaemonSet, you can switch to Localhost profiles.
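To verify which AppArmor profile a container actually runs under (whether RuntimeDefault or Localhost), you can read the kernel's record for the container's main process; using the apparmor-test Pod from earlier as an example, and noting that the profile name in the output depends on your container runtime:

# Prints the profile confining PID 1 of the container, e.g. "k8s-test-profile (enforce)"
kubectl exec apparmor-test -- cat /proc/1/attr/current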
AppArmor profile management in Kubernetes requires cluster-level coordination. For production clusters, use a profile distribution tool like the Kubernetes Security Profiles Operator (security-profiles-operator), which simplifies profile deployment. If your nodes run different distributions (Ubuntu, Debian, RHEL), profile syntax may vary slightly, so test thoroughly. Rootless Kubernetes clusters may have different AppArmor constraints due to unprivileged namespace limitations. In multi-cluster setups, maintain profile consistency via IaC tools (Terraform, Ansible) to avoid cross-cluster AppArmor mismatches. For CI/CD pipelines, ensure your container build process doesn't strip capabilities your AppArmor profiles rely on. AppArmor profiles are node-local and not portable: the same profile must be loaded identically on every node where Pods using it will run. Consider using node affinity or a nodeSelector, as sketched below, to ensure Pods with AppArmor requirements run only on nodes with those profiles loaded.
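As a sketch of that last point, the simplest option is a nodeSelector keyed on a label you apply to prepared nodes; the label name apparmor-profiles=loaded is an assumption, applied with something like kubectl label node <node-name> apparmor-profiles=loaded:

spec:
  nodeSelector:
    apparmor-profiles: loaded   # hypothetical label set only on nodes with the required profiles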