Droogles:
I’m wondering if we should consider it a bug that Kubernetes lets us have spec.securityContext in a pod definition file twice. This has bitten me in every CK course so far. I will do kubectl get pod <pod> -o yaml > <pod>.yaml and then add a security context under spec, which creates the pod successfully, but without the security context, because there’s another instance of securityContext lower down in the spec.

The pod is running its containers as root:

controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
root

Then get the YAML for the pod:

controlplane $ kubectl get pod ubuntu-sleeper -o yaml > ubuntu-sleeper.yaml

For example, after adding a securityContext under spec:

  uid: f3079738-179a-4330-adae-431850280f0d
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"

And lower in the file:

  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default

Then apply the file with two spec.securityContext keys:

controlplane $ kubectl apply -f ubuntu-sleeper.yaml 
pod/ubuntu-sleeper configured

Then run the exec command again:

controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
root

Check the file again and confirm there are two spec.securityContext keys:

controlplane $ cat ubuntu-sleeper.yaml | grep -C2 -i securitycon

--
  uid: 412921a8-f6a4-41e6-934e-a64ad4170de2
spec:
  securityContext:
    runAsUser: 1000
  containers:
--
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default

Remove the lower securityContext: {}, delete the pod, and apply again:

controlplane $ kubectl apply -f ubuntu-sleeper.yaml 
pod/ubuntu-sleeper created
controlplane $ kubectl exec -it ubuntu-sleeper -- whoami
whoami: cannot find name for user ID 1000
command terminated with exit code 1

It seems like Kubernetes shouldn’t allow duplicate keys like this. Thoughts?

Mumshad Mannambeth:
Hi David! I understand your concern. However, this isn’t a bug or issue with Kubernetes; rather, it’s got to do with YAML. The two securityContext specifications never even reach the Kubernetes controllers. As soon as the YAML is read and processed, only one of them remains. The same applies to every other field in YAML: if you specify restartPolicy twice or serviceAccount twice, only one will be considered. You only hit this with securityContext because it’s hidden far down in the file and goes unnoticed. I have seen multiple people struggle with it. I must add a note for this lab.
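
To illustrate, here is a minimal sketch using the sigs.k8s.io/yaml package (the YAML library kubectl uses to read manifests; exact behavior can vary by library version, so treat this as illustrative rather than the real kubectl code path):

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Two securityContext keys at the same level, as in the generated manifest.
const spec = `
securityContext:
  runAsUser: 1010
securityContext: {}
`

func main() {
	var parsed map[string]interface{}
	// Non-strict parsing silently collapses the duplicates:
	// the later, empty securityContext overwrites the earlier one.
	if err := yaml.Unmarshal([]byte(spec), &parsed); err != nil {
		panic(err)
	}
	fmt.Println(parsed["securityContext"]) // prints map[] -- runAsUser is gone
}

And kubectl does this YAML-to-JSON conversion on the client side, so the API server only ever sees the one surviving securityContext.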

A YAML linter can help catch these (yamllint, for example, has a key-duplicates rule). If you use an IDE to edit files, it will highlight these issues.
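
As a rough sketch of the kind of check such a linter performs: the gopkg.in/yaml.v3 library rejects duplicate mapping keys by default (an assumption about that library’s stricter defaults; verify for your version), so simply decoding a manifest is enough to surface them:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// A lint-style duplicate-key check: unlike the permissive parsing shown
// earlier, yaml.v3 treats duplicate mapping keys as a decode error.
func checkDuplicateKeys(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var doc map[string]interface{}
	return yaml.Unmarshal(data, &doc)
}

func main() {
	// File name taken from the example above.
	if err := checkDuplicateKeys("ubuntu-sleeper.yaml"); err != nil {
		// e.g. yaml: ... mapping key "securityContext" already defined
		fmt.Println("lint:", err)
	}
}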

Droogles:
Thanks. I always catch it because I validate my work and see that there is in fact no securityContext set after Kubernetes is done with it. Just out of curiosity, what parses the YAML and converts it to JSON before it hits the API? Is this mechanism not part of Kubernetes?

Droogles:
Per the spec, YAML should enforce unique keys: https://yaml.org/spec/1.2/spec.html#id2764044

Droogles:
Actually, it does seem to be an issue with Kubernetes that’s been raised before: https://github.com/kubernetes/kubernetes/issues/14791

Droogles:
Looks like the concern is that switching to UnmarshalStrict could cause regressions.
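
For reference, the change debated in that issue amounts to parsing manifests strictly. A minimal sketch of the difference using sigs.k8s.io/yaml (just the two library calls side by side, not the actual kubectl code path):

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

const manifest = `
securityContext:
  runAsUser: 1010
securityContext: {}
`

func main() {
	var out map[string]interface{}

	// Current behavior: duplicate keys are silently collapsed.
	fmt.Println(yaml.Unmarshal([]byte(manifest), &out)) // <nil>

	// Strict behavior: duplicate keys become an error. It would also
	// reject existing manifests that unknowingly rely on the silent
	// collapse -- the regression concern mentioned above.
	fmt.Println(yaml.UnmarshalStrict([]byte(manifest), &out)) // non-nil error
}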