A container running as a non-root user cannot read or write a mounted volume because the volume's files are owned by root or a different user. Fix this by adding fsGroup to the pod's securityContext to grant group-based access, or by using an init container to change permissions before the main container starts.
When Kubernetes mounts a volume into a container, file ownership on the volume defaults to the host's root user. If a container runs as a non-root user (via runAsUser in securityContext), it lacks read/write permissions on the mounted files and directories. Kubernetes provides fsGroup as a solution: specifying fsGroup in the pod securityContext tells Kubernetes to automatically change the volume's group ownership to that GID, granting access to containers running as that group. This is a fundamental Linux permissions issue applied to Kubernetes storage.
Edit your pod or Deployment manifest to add fsGroup. Kubernetes will automatically change the volume's group ownership, allowing non-root containers to access it:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000   # Non-root user ID
    fsGroup: 2000     # Supplemental group ID for volume access
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
Apply the change:
kubectl apply -f pod.yaml
Kubernetes will automatically change the volume's group to GID 2000, allowing the user (UID 1000) to read/write files.
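To confirm the settings landed on the live object, you can inspect the pod's effective securityContext; this is an optional sanity check, and the exact output format varies by kubectl version:
kubectl get pod my-pod -o jsonpath='{.spec.securityContext}'
# Should show fsGroup: 2000 and runAsUser: 1000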
By default, Kubernetes recursively changes permissions on every file in the volume (slow for large volumes). Use fsGroupChangePolicy to skip this if the root directory already has correct permissions (Kubernetes 1.20+):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"  # Only change if root dir doesn't match
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
For volumes with millions of files, this can reduce startup time from 10+ minutes to seconds.
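To double-check that the cluster accepted the field, you can read it back from the running pod; empty output suggests the field was dropped (for example on a pre-1.20 cluster):
kubectl get pod my-pod -o jsonpath='{.spec.securityContext.fsGroupChangePolicy}'
# Should print: OnRootMismatch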
Make the container's primary group match the fsGroup:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000  # Primary group ID
    fsGroup: 1000     # Must match runAsGroup
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
This simplifies debugging: files the container creates on the volume are owned by UID:GID 1000:1000, and the container process runs as that same user and group.
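To confirm the container process really runs with the matching UID and GID, you can call the standard id utility inside it (this assumes the image ships id, which most base images do):
kubectl exec my-pod -- id
# Expected (roughly): uid=1000 gid=1000 groups=1000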
Some volume types (NFS, HostPath) don't support automatic permission changes. Use an init container with elevated privileges to fix permissions:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  initContainers:
  - name: fix-permissions
    image: busybox
    command: ['sh', '-c', 'chown -R 1000:2000 /data']
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-image
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    nfs:
      server: 192.168.1.100
      path: "/exports/data"
The init container runs as root (by default) and changes ownership before the main container starts.
Check actual ownership to confirm the fix:
# Inside the running container
kubectl exec -it <pod-name> -- ls -la /data
# Expected output for the examples above: group ownership matches the fsGroup, e.g.
# drwxrwsr-x ... root 2000 (or 1000 2000 after the init-container chown, 1000 1000 if fsGroup=1000)
# Attach an ephemeral debug container for further inspection
# (note: it does not mount the pod's volumes, so check /data via kubectl exec)
kubectl debug -it <pod-name> --image=busybox
Files should belong to the fsGroup's GID, and the running user should have read/write access through that group.
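A directory listing alone doesn't prove write access; a quick write-and-delete test from inside the container settles it (assumes the image has a shell, and the file name is arbitrary):
kubectl exec -it <pod-name> -- sh -c 'touch /data/.write-test && rm /data/.write-test && echo write OK'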
If worker nodes run SELinux in enforcing mode, additional configuration is needed:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
    seLinuxOptions:
      level: "s0:c123,c456"
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
Note: HostPath and NFS don't support SELinux relabeling through seLinuxOptions. For these, configure SELinux policy on the NFS server or mount with -o context=... options.
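To check whether a worker is actually enforcing SELinux before adjusting labels, run getenforce on the node itself (requires node access, for example via SSH):
# On the worker node
getenforce
# Returns Enforcing, Permissive, or Disabled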
- fsGroup is fully supported on most volume types (emptyDir, configMap, secret, downwardAPI, projected, and cloud provider volumes like EBS).
- NFS and HostPath volumes don't support Kubernetes-managed permission changes; configure NFS host exports with matching GIDs or use init containers instead.
- CSI drivers must declare the VOLUME_MOUNT_GROUP capability to support fsGroup; check the driver's documentation.
- On Kubernetes 1.20+, fsGroupChangePolicy "OnRootMismatch" significantly speeds up large volume mounts by skipping the recursive chown if the root directory already has correct permissions.
- For Azure File storage, permission management is handled by the storage account, not Kubernetes; volume mount issues there usually require different fixes.
- SELinux is complex; if using SELinux on workers, consult your SELinux policy team before mounting volumes into restricted contexts.
- Performance tip: for very large volumes (100k+ files), test fsGroupChangePolicy behavior in non-production first, as behavior varies by storage backend.
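For CSI-backed volumes, you can also check how the driver declares fsGroup support by inspecting its CSIDriver object; the fsGroupPolicy field (File, ReadWriteOnceWithFSType, or None) controls whether Kubernetes applies fsGroup for that driver. The driver name below is a placeholder:
kubectl get csidriver <driver-name> -o jsonpath='{.spec.fsGroupPolicy}'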