The "Job completions is immutable" error occurs when you attempt to modify the `.spec.completions` field on an already-created Kubernetes Job. This field specifies required pod completions and cannot be changed after creation.
Kubernetes prevents modifying the `.spec.completions` field (and several other Job fields) after a Job is created. The `completions` field defines how many pod completions are needed for the Job to succeed, and it is locked at creation time to prevent mid-execution changes that could cause unpredictable behavior. This is an intentional design decision: if completions could change mid-run, pods that had already finished might no longer satisfy the new requirement, and the Job controller would face an ambiguous completion state. Kubernetes therefore enforces immutability on critical Job fields.
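You can reproduce the error directly. A minimal sketch, assuming a Job named `myapp-job` already exists in the cluster:

```bash
# Patching completions on a live Job is rejected by the API server;
# the exact message varies by version but names the immutable field.
kubectl patch job myapp-job --type=merge -p '{"spec":{"completions":5}}'
```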
Immutable Job fields (cannot be changed after creation):
- .spec.completions - number of required completions
- .spec.template - pod template specification
- .spec.selector - label selector for pod tracking
Mutable fields:
- .spec.parallelism - number of pods running in parallel (changes may be restricted by completion mode)
- .spec.activeDeadlineSeconds - deadline after which the Job is stopped
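For contrast, the mutable fields accept a live patch. A quick sketch, again assuming the `myapp-job` from above:

```bash
# Scaling parallelism on a running Job is allowed
kubectl patch job myapp-job --type=merge -p '{"spec":{"parallelism":2}}'

# Setting or tightening the deadline is also allowed
kubectl patch job myapp-job --type=merge -p '{"spec":{"activeDeadlineSeconds":600}}'
```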
If you need to change immutable fields, delete and recreate.
Remove the Job to allow recreation:

```bash
kubectl delete job <job-name>
```

Wait for deletion to complete:

```bash
kubectl delete job <job-name> --wait=true
```

Verify it's gone:

```bash
kubectl get job <job-name>
# Should return: No resources found
```

Once deleted, apply the new Job definition:

```bash
kubectl apply -f updated-job.yaml
```

Verify creation:

```bash
kubectl get job <job-name>
kubectl describe job <job-name>
```

Watch completion status:

```bash
kubectl get jobs --watch
```
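If the updated manifest lives in a file, `kubectl replace --force` collapses the delete-and-recreate sequence into one command (note that plain `replace` expects the object to already exist):

```bash
# Deletes the live Job and recreates it from the manifest in one step
kubectl replace --force -f updated-job.yaml
```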
To avoid this issue, finalize Job specifications before initial creation:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-job
spec:
  completions: 3    # Set this value correctly before creating
  parallelism: 1
  template:
    metadata:
      labels:
        app: myapp-job
    spec:
      containers:
        - name: job-container
          image: myapp:latest
      restartPolicy: Never
```

Test in development first before applying to production.
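Before the first creation, you can also validate the manifest against the API server without persisting anything (server-side dry run; assumes a reasonably recent kubectl and cluster):

```bash
# Runs server-side validation and admission, but creates nothing
kubectl apply -f job.yaml --dry-run=server
```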
For automated deployments, script the delete-and-recreate pattern:

```bash
#!/bin/bash
JOB_NAME="myapp-job"
NAMESPACE="default"

# Delete the Job if it exists and block until deletion completes
kubectl delete job "$JOB_NAME" -n "$NAMESPACE" --ignore-not-found=true --wait=true

# Apply the new Job
kubectl apply -f job.yaml -n "$NAMESPACE"
```

This ensures a clean state before Job creation.
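A variation on the same idea, assuming your pipeline can tolerate uniquely named Jobs, is to give every run a fresh name so an in-place update is never attempted. The timestamp suffix here is illustrative:

```bash
# Each run creates a brand-new Job object, so immutability never applies.
# For fields like completions you would template a full manifest instead.
JOB_NAME="myapp-job-$(date +%s)"
kubectl create job "$JOB_NAME" --image=myapp:latest -n default
```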
In ArgoCD or Flux configurations, Jobs must be handled specially since they're immutable:
Option 1: Delete and recreate via sync hooks

In ArgoCD, annotate the Job itself so it runs as a sync hook and the previous instance is deleted before a new one is created:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-job
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  # ...
```

Alternatively, annotating the Job with `argocd.argoproj.io/sync-options: Replace=true` tells ArgoCD to replace the object on sync instead of patching it in place.

Option 2: Use CronJob instead
For recurring jobs, use CronJob, which handles cleanup automatically:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myapp-cronjob
spec:
  schedule: "0 2 * * *"   # Daily at 2 AM
  jobTemplate:
    spec:
      completions: 3
      template:
        # ... pod template
```

When deleting a Job, control what happens to its pods:
```bash
# Delete the Job and its pods (cascading delete is the default)
kubectl delete job <job-name>

# Delete the Job but keep its pods (useful for post-mortem inspection)
kubectl delete job <job-name> --cascade=orphan

# Force delete (use with caution)
kubectl delete job <job-name> --grace-period=0 --force
```
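After `--cascade=orphan`, the pods survive the Job and must be cleaned up by hand. A sketch assuming the default `job-name` label the Job controller applies to its pods:

```bash
# Inspect the orphaned pods first
kubectl get pods -l job-name=myapp-job

# Remove them once inspection is done
kubectl delete pods -l job-name=myapp-job
```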
In Kubernetes manifests, this deletion behavior is driven by ownerReferences. The Job controller automatically stamps each pod it creates with a reference like the one below; cascading deletion follows these links, and under foreground deletion a pod with `blockOwnerDeletion: true` must be removed before the Job itself disappears:

```yaml
metadata:
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: myapp-job
      uid: <uid>
      blockOwnerDeletion: true
      controller: true
```

If your Job runs regularly, use CronJob, which automatically handles cleanup and recreation:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myapp-cronjob
spec:
  schedule: "0 2 * * *"            # Cron schedule
  successfulJobsHistoryLimit: 3    # Keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1        # Keep the last failed Job
  jobTemplate:
    spec:
      completions: 3
      parallelism: 1
      template:
        spec:
          containers:
            - name: job
              image: myapp:latest
          restartPolicy: OnFailure
```

CronJob automatically creates new Job instances, bypassing immutability issues.
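This also gives you a clean way to trigger ad-hoc runs, since each trigger creates a brand-new Job from the CronJob's template (the Job name below is arbitrary):

```bash
# Creates a fresh Job from the CronJob's jobTemplate
kubectl create job myapp-manual-run --from=cronjob/myapp-cronjob
```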
Job immutability is by design: it prevents race conditions and state inconsistencies, since the Job controller relies on spec fields not changing during execution. For dynamic workloads requiring variable parallelism or completions, prefer CronJob for recurring tasks or custom controllers for complex requirements. When debugging Job immutability issues in GitOps workflows, check whether ArgoCD/Flux is trying to update the Job in place rather than replacing it. The --cascade flag controls whether pods are deleted along with the Job; orphaned pods must be cleaned up manually. For batch processing pipelines, consider Argo Workflows or Tekton, which handle pod templates more flexibly than native Jobs.