Droogles:
I’m wondering if we should consider it a bug that Kubernetes lets us have spec.securityContext twice in a pod definition file. This has bitten me in every CK course so far. I’ll run kubectl get pod <pod> -o yaml > <pod>.yaml, then add a security context under spec. The pod is created successfully, but without the security context, because there’s another instance of securityContext lower down in the spec.
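For context on why the lower one wins: duplicate mapping keys are technically invalid YAML, but many parsers quietly keep the last occurrence, so the leftover securityContext: {} overrides the one added near the top. A quick illustration with PyYAML, if it’s available (not necessarily the parser kubectl uses; just to show the typical last-key-wins behavior):
controlplane $ python3 -c 'import yaml; print(yaml.safe_load("a: 1\na: 2"))'
{'a': 2}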
The pod is running its container as root:
controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
root
Then get the YAML for the pod:
controlplane $ kubectl get pod ubuntu-sleeper -o yaml > ubuntu-sleeper.yaml
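(As an aside: one way to avoid the leftover server-populated fields entirely is to generate a fresh manifest with --dry-run instead of exporting the live object. Something like this, assuming the same image and args:
controlplane $ kubectl run ubuntu-sleeper --image=ubuntu --dry-run=client -o yaml -- sleep 4800 > ubuntu-sleeper.yaml
But here I kept working with the exported file.)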
For example, after adding the securityContext under spec:
  uid: f3079738-179a-4330-adae-431850280f0d
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
Lower in the file:
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
Then apply with the two spec.securityContext keys in place:
controlplane $ kubectl apply -f ubuntu-sleeper.yaml
pod/ubuntu-sleeper configured
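You can also ask the API server which value it actually kept; given the last-key-wins parsing, I’d expect the empty object here:
controlplane $ kubectl get pod ubuntu-sleeper -o jsonpath='{.spec.securityContext}'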
Then run the exec command again:
controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
root
Check the file again and confirm the two spec.securityContext keys:
controlplane $ cat ubuntu-sleeper.yaml | grep -C2 -i securitycon
--
  uid: 412921a8-f6a4-41e6-934e-a64ad4170de2
spec:
  securityContext:
    runAsUser: 1000
  containers:
--
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
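A linter would have caught this before the apply. yamllint, for example, flags duplicate mapping keys with its default rules (assuming it’s installed where you’re editing):
controlplane $ yamllint ubuntu-sleeper.yaml
It reports a key-duplicates error pointing at the second securityContext line.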
Remove the lower securityContext: {} and recreate the pod:
controlplane $ kubectl apply -f ubuntu-sleeper.yaml
pod/ubuntu-sleeper created
controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
whoami: cannot find name for user ID 1000
command terminated with exit code 1
That error is actually the success case: the container is now running as UID 1000, and whoami fails only because there’s no entry for that UID in the image’s /etc/passwd.
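If you want a check that doesn’t depend on the UID having a name in /etc/passwd, id -u prints the numeric UID directly and should show 1000 here:
controlplane $ kubectl exec ubuntu-sleeper -- id -u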
Seems like Kubernetes shouldn’t silently accept two of the same key at the same level. Thoughts?